00:00:00.003 Started by upstream project "autotest-per-patch" build number 132004 00:00:00.003 originally caused by: 00:00:00.003 Started by user sys_sgci 00:00:00.060 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.061 The recommended git tool is: git 00:00:00.061 using credential 00000000-0000-0000-0000-000000000002 00:00:00.064 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.095 Fetching changes from the remote Git repository 00:00:00.097 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.157 Using shallow fetch with depth 1 00:00:00.157 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.157 > git --version # timeout=10 00:00:00.199 > git --version # 'git version 2.39.2' 00:00:00.199 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.239 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.239 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.134 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.146 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.159 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:06.159 > git config core.sparsecheckout # timeout=10 00:00:06.171 > git read-tree -mu HEAD # timeout=10 00:00:06.187 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:06.208 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:06.208 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:06.319 [Pipeline] Start of Pipeline 00:00:06.334 [Pipeline] library 00:00:06.336 Loading library shm_lib@master 00:00:07.580 Library shm_lib@master is cached. Copying from home. 00:00:07.621 [Pipeline] node 00:00:07.739 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.744 [Pipeline] { 00:00:07.759 [Pipeline] catchError 00:00:07.761 [Pipeline] { 00:00:07.781 [Pipeline] wrap 00:00:07.791 [Pipeline] { 00:00:07.801 [Pipeline] stage 00:00:07.805 [Pipeline] { (Prologue) 00:00:08.014 [Pipeline] sh 00:00:08.302 + logger -p user.info -t JENKINS-CI 00:00:08.318 [Pipeline] echo 00:00:08.319 Node: CYP9 00:00:08.325 [Pipeline] sh 00:00:08.648 [Pipeline] setCustomBuildProperty 00:00:08.695 [Pipeline] echo 00:00:08.715 Cleanup processes 00:00:08.733 [Pipeline] sh 00:00:09.024 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.024 1315699 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.039 [Pipeline] sh 00:00:09.329 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.329 ++ grep -v 'sudo pgrep' 00:00:09.329 ++ awk '{print $1}' 00:00:09.329 + sudo kill -9 00:00:09.329 + true 00:00:09.346 [Pipeline] cleanWs 00:00:09.357 [WS-CLEANUP] Deleting project workspace... 00:00:09.357 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.365 [WS-CLEANUP] done 00:00:09.369 [Pipeline] setCustomBuildProperty 00:00:09.383 [Pipeline] sh 00:00:09.670 + sudo git config --global --replace-all safe.directory '*' 00:00:09.748 [Pipeline] httpRequest 00:00:11.852 [Pipeline] echo 00:00:11.854 Sorcerer 10.211.164.101 is alive 00:00:11.862 [Pipeline] retry 00:00:11.864 [Pipeline] { 00:00:11.874 [Pipeline] httpRequest 00:00:11.878 HttpMethod: GET 00:00:11.879 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:11.879 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:11.908 Response Code: HTTP/1.1 200 OK 00:00:11.908 Success: Status code 200 is in the accepted range: 200,404 00:00:11.908 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:33.964 [Pipeline] } 00:00:33.982 [Pipeline] // retry 00:00:33.990 [Pipeline] sh 00:00:34.278 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:34.297 [Pipeline] httpRequest 00:00:34.677 [Pipeline] echo 00:00:34.679 Sorcerer 10.211.164.101 is alive 00:00:34.689 [Pipeline] retry 00:00:34.691 [Pipeline] { 00:00:34.706 [Pipeline] httpRequest 00:00:34.711 HttpMethod: GET 00:00:34.712 URL: http://10.211.164.101/packages/spdk_c3ade7c9c2825a4f8826f8a47e2df7beed925e53.tar.gz 00:00:34.712 Sending request to url: http://10.211.164.101/packages/spdk_c3ade7c9c2825a4f8826f8a47e2df7beed925e53.tar.gz 00:00:34.727 Response Code: HTTP/1.1 200 OK 00:00:34.727 Success: Status code 200 is in the accepted range: 200,404 00:00:34.728 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c3ade7c9c2825a4f8826f8a47e2df7beed925e53.tar.gz 00:01:21.021 [Pipeline] } 00:01:21.038 [Pipeline] // retry 00:01:21.045 [Pipeline] sh 00:01:21.335 + tar --no-same-owner -xf spdk_c3ade7c9c2825a4f8826f8a47e2df7beed925e53.tar.gz 00:01:24.654 [Pipeline] sh 00:01:24.942 + git -C spdk log --oneline -n5 00:01:24.943 c3ade7c9c nvme/nvme: Factor out submit_request function 00:01:24.943 1f58a2f77 accel/mlx5: Factor out task submissions 00:01:24.943 7e535eb70 nvme/rdma: Remove qpair::max_recv_sge as unused 00:01:24.943 0a1fe8414 nvme/rdma: Add likely/unlikely to IO path 00:01:24.943 5e254ac5b nvme/rdma: Factor our contig request preparation 00:01:24.955 [Pipeline] } 00:01:24.969 [Pipeline] // stage 00:01:24.978 [Pipeline] stage 00:01:24.980 [Pipeline] { (Prepare) 00:01:24.997 [Pipeline] writeFile 00:01:25.013 [Pipeline] sh 00:01:25.300 + logger -p user.info -t JENKINS-CI 00:01:25.311 [Pipeline] sh 00:01:25.595 + logger -p user.info -t JENKINS-CI 00:01:25.608 [Pipeline] sh 00:01:25.893 + cat autorun-spdk.conf 00:01:25.893 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.893 SPDK_TEST_NVMF=1 00:01:25.893 SPDK_TEST_NVME_CLI=1 00:01:25.893 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.893 SPDK_TEST_NVMF_NICS=e810 00:01:25.893 SPDK_TEST_VFIOUSER=1 00:01:25.893 SPDK_RUN_UBSAN=1 00:01:25.893 NET_TYPE=phy 00:01:25.902 RUN_NIGHTLY=0 00:01:25.908 [Pipeline] readFile 00:01:25.933 [Pipeline] withEnv 00:01:25.935 [Pipeline] { 00:01:25.947 [Pipeline] sh 00:01:26.237 + set -ex 00:01:26.237 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:26.237 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:26.237 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.237 ++ SPDK_TEST_NVMF=1 00:01:26.237 ++ SPDK_TEST_NVME_CLI=1 00:01:26.237 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.237 ++ SPDK_TEST_NVMF_NICS=e810 00:01:26.237 
++ SPDK_TEST_VFIOUSER=1
00:01:26.237 ++ SPDK_RUN_UBSAN=1
00:01:26.237 ++ NET_TYPE=phy
00:01:26.237 ++ RUN_NIGHTLY=0
00:01:26.237 + case $SPDK_TEST_NVMF_NICS in
00:01:26.237 + DRIVERS=ice
00:01:26.237 + [[ tcp == \r\d\m\a ]]
00:01:26.237 + [[ -n ice ]]
00:01:26.237 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:26.237 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:26.237 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:26.237 rmmod: ERROR: Module irdma is not currently loaded
00:01:26.237 rmmod: ERROR: Module i40iw is not currently loaded
00:01:26.237 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:26.237 + true
00:01:26.237 + for D in $DRIVERS
00:01:26.237 + sudo modprobe ice
00:01:26.237 + exit 0
00:01:26.247 [Pipeline] }
00:01:26.262 [Pipeline] // withEnv
00:01:26.268 [Pipeline] }
00:01:26.281 [Pipeline] // stage
00:01:26.290 [Pipeline] catchError
00:01:26.292 [Pipeline] {
00:01:26.307 [Pipeline] timeout
00:01:26.307 Timeout set to expire in 1 hr 0 min
00:01:26.308 [Pipeline] {
00:01:26.322 [Pipeline] stage
00:01:26.324 [Pipeline] { (Tests)
00:01:26.338 [Pipeline] sh
00:01:26.628 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:26.628 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:26.628 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:26.628 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:26.628 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:26.628 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:26.628 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:26.628 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:26.628 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:26.628 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:26.628 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:26.628 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:26.628 + source /etc/os-release
00:01:26.628 ++ NAME='Fedora Linux'
00:01:26.628 ++ VERSION='39 (Cloud Edition)'
00:01:26.628 ++ ID=fedora
00:01:26.628 ++ VERSION_ID=39
00:01:26.628 ++ VERSION_CODENAME=
00:01:26.628 ++ PLATFORM_ID=platform:f39
00:01:26.628 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:26.628 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:26.628 ++ LOGO=fedora-logo-icon
00:01:26.628 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:26.628 ++ HOME_URL=https://fedoraproject.org/
00:01:26.628 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:26.628 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:26.628 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:26.628 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:26.628 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:26.628 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:26.628 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:26.628 ++ SUPPORT_END=2024-11-12
00:01:26.628 ++ VARIANT='Cloud Edition'
00:01:26.628 ++ VARIANT_ID=cloud
00:01:26.628 + uname -a
00:01:26.628 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:26.628 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:29.935 Hugepages
00:01:29.935 node   hugesize    free /  total
00:01:29.935 node0  1048576kB      0 /      0
00:01:29.935 node0     2048kB      0 /      0
00:01:29.935 node1  1048576kB      0 /      0
00:01:29.935 node1     2048kB      0 /      0
00:01:29.935
00:01:29.935 Type   BDF            Vendor Device NUMA   Driver    Device   Block devices
00:01:29.935 I/OAT  0000:00:01.0   8086   0b00   0      ioatdma   -        -
00:01:29.935 I/OAT  0000:00:01.1   8086   0b00   0      ioatdma   -        -
00:01:29.935 I/OAT  0000:00:01.2   8086   0b00   0      ioatdma   -        -
00:01:29.935 I/OAT  0000:00:01.3   8086   0b00   0      ioatdma   -        -
00:01:29.935 I/OAT  0000:00:01.4   8086   0b00   0      ioatdma   -        -
00:01:29.935 I/OAT  0000:00:01.5   8086   0b00   0      ioatdma   -        -
00:01:29.935 I/OAT  0000:00:01.6   8086   0b00   0      ioatdma   -        -
00:01:29.935 I/OAT  0000:00:01.7   8086   0b00   0      ioatdma   -        -
00:01:29.935 NVMe   0000:65:00.0   144d   a80a   0      nvme      nvme0    nvme0n1
00:01:29.935 I/OAT  0000:80:01.0   8086   0b00   1      ioatdma   -        -
00:01:29.935 I/OAT  0000:80:01.1   8086   0b00   1      ioatdma   -        -
00:01:29.935 I/OAT  0000:80:01.2   8086   0b00   1      ioatdma   -        -
00:01:29.935 I/OAT  0000:80:01.3   8086   0b00   1      ioatdma   -        -
00:01:29.935 I/OAT  0000:80:01.4   8086   0b00   1      ioatdma   -        -
00:01:29.935 I/OAT  0000:80:01.5   8086   0b00   1      ioatdma   -        -
00:01:29.935 I/OAT  0000:80:01.6   8086   0b00   1      ioatdma   -        -
00:01:29.935 I/OAT  0000:80:01.7   8086   0b00   1      ioatdma   -        -
00:01:29.935 + rm -f /tmp/spdk-ld-path
00:01:29.935 + source autorun-spdk.conf
00:01:29.935 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.935 ++ SPDK_TEST_NVMF=1
00:01:29.935 ++ SPDK_TEST_NVME_CLI=1
00:01:29.935 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:29.935 ++ SPDK_TEST_NVMF_NICS=e810
00:01:29.935 ++ SPDK_TEST_VFIOUSER=1
00:01:29.935 ++ SPDK_RUN_UBSAN=1
00:01:29.935 ++ NET_TYPE=phy
00:01:29.935 ++ RUN_NIGHTLY=0
00:01:29.935 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:29.935 + [[ -n '' ]]
00:01:29.935 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:29.935 + for M in /var/spdk/build-*-manifest.txt
00:01:29.935 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:29.935 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:29.935 + for M in /var/spdk/build-*-manifest.txt
00:01:29.935 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:29.935 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:29.935 + for M in /var/spdk/build-*-manifest.txt
00:01:29.935 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:29.935 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:29.935 ++ uname
00:01:29.935 + [[ Linux == \L\i\n\u\x ]]
00:01:29.935 + sudo dmesg -T
00:01:29.935 + sudo dmesg --clear
00:01:29.935 + dmesg_pid=1316787
00:01:29.935 + [[ Fedora Linux == FreeBSD ]]
00:01:29.935 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:29.935 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:29.935 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:29.935 + [[ -x /usr/src/fio-static/fio ]]
00:01:29.935 + export FIO_BIN=/usr/src/fio-static/fio
00:01:29.935 + FIO_BIN=/usr/src/fio-static/fio
00:01:29.935 + sudo dmesg -Tw
00:01:29.935 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:29.935 + [[ !
-v VFIO_QEMU_BIN ]] 00:01:29.935 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:29.935 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:29.935 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:29.935 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:29.935 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:29.935 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:29.935 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:29.935 Test configuration: 00:01:29.935 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.935 SPDK_TEST_NVMF=1 00:01:29.935 SPDK_TEST_NVME_CLI=1 00:01:29.935 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:29.935 SPDK_TEST_NVMF_NICS=e810 00:01:29.935 SPDK_TEST_VFIOUSER=1 00:01:29.935 SPDK_RUN_UBSAN=1 00:01:29.935 NET_TYPE=phy 00:01:29.935 RUN_NIGHTLY=0 12:06:04 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:29.935 12:06:04 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:29.935 12:06:04 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:29.935 12:06:04 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:29.935 12:06:04 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:29.935 12:06:04 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:29.935 12:06:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.935 12:06:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.935 12:06:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.935 12:06:04 -- paths/export.sh@5 -- $ export PATH 00:01:29.935 12:06:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.935 12:06:04 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:29.935 12:06:04 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:29.935 12:06:04 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730718364.XXXXXX 00:01:29.935 12:06:04 -- common/autobuild_common.sh@486 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1730718364.6jqrwa 00:01:29.935 12:06:04 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:29.935 12:06:04 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:29.935 12:06:04 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:29.935 12:06:04 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:29.935 12:06:04 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:29.935 12:06:04 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:29.935 12:06:04 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:29.935 12:06:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.935 12:06:04 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:29.935 12:06:04 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:29.935 12:06:04 -- pm/common@17 -- $ local monitor 00:01:29.935 12:06:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.935 12:06:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.935 12:06:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.935 12:06:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.935 12:06:04 -- pm/common@21 -- $ date +%s 00:01:29.935 12:06:04 -- pm/common@25 -- $ sleep 1 00:01:29.935 12:06:04 -- pm/common@21 -- $ date +%s 00:01:29.935 12:06:04 -- pm/common@21 -- $ date +%s 00:01:29.935 12:06:04 -- pm/common@21 -- $ date +%s 00:01:29.936 12:06:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730718364 00:01:29.936 12:06:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730718364 00:01:29.936 12:06:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730718364 00:01:29.936 12:06:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730718364 00:01:30.198 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730718364_collect-cpu-temp.pm.log 00:01:30.198 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730718364_collect-cpu-load.pm.log 00:01:30.198 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730718364_collect-vmstat.pm.log 00:01:30.198 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730718364_collect-bmc-pm.bmc.pm.log 00:01:31.140 12:06:05 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:31.140 12:06:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:31.140 12:06:05 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:31.140 12:06:05 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:31.140 12:06:05 -- spdk/autobuild.sh@16 -- $ date -u 00:01:31.140 Mon Nov 4 11:06:05 AM UTC 2024 00:01:31.140 12:06:05 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:31.140 v25.01-pre-79-gc3ade7c9c 00:01:31.140 12:06:05 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:31.140 12:06:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:31.140 12:06:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:31.140 12:06:05 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:31.140 12:06:05 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:31.140 12:06:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.140 ************************************ 00:01:31.140 START TEST ubsan 00:01:31.140 ************************************ 00:01:31.140 12:06:05 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:31.140 using ubsan 00:01:31.140 00:01:31.140 real 0m0.001s 00:01:31.140 user 0m0.000s 00:01:31.140 sys 0m0.001s 00:01:31.140 12:06:05 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:31.140 12:06:05 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:31.140 ************************************ 00:01:31.140 END TEST ubsan 00:01:31.140 ************************************ 00:01:31.140 12:06:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:31.140 12:06:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:31.140 12:06:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:31.140 12:06:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:31.140 12:06:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:31.140 12:06:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:31.140 12:06:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:31.140 12:06:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:31.140 12:06:05 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:31.400 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:31.400 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:31.661 Using 'verbs' RDMA provider 00:01:47.512 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:59.744 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:59.744 Creating mk/config.mk...done. 00:01:59.744 Creating mk/cc.flags.mk...done. 00:01:59.744 Type 'make' to build. 
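The configure line above is exactly the config_params string assembled earlier in the log, and the build kicks off right after. A minimal sketch of reproducing this configure-and-build step by hand, assuming an SPDK checkout at ./spdk like the one in this workspace (flags copied verbatim from the configure invocation above):

    #!/usr/bin/env bash
    # Sketch: rerun the SPDK configure + make that the autobuild stage performs above.
    set -e
    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j144   # same parallelism as the "run_test make make -j144" step that follows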
00:01:59.744 12:06:34 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:59.744 12:06:34 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:59.744 12:06:34 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:59.744 12:06:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:59.744 ************************************ 00:01:59.744 START TEST make 00:01:59.744 ************************************ 00:01:59.744 12:06:34 make -- common/autotest_common.sh@1125 -- $ make -j144 00:02:00.007 make[1]: Nothing to be done for 'all'. 00:02:01.397 The Meson build system 00:02:01.397 Version: 1.5.0 00:02:01.397 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:01.397 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:01.397 Build type: native build 00:02:01.397 Project name: libvfio-user 00:02:01.397 Project version: 0.0.1 00:02:01.397 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:01.397 C linker for the host machine: cc ld.bfd 2.40-14 00:02:01.397 Host machine cpu family: x86_64 00:02:01.397 Host machine cpu: x86_64 00:02:01.397 Run-time dependency threads found: YES 00:02:01.397 Library dl found: YES 00:02:01.397 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:01.397 Run-time dependency json-c found: YES 0.17 00:02:01.397 Run-time dependency cmocka found: YES 1.1.7 00:02:01.397 Program pytest-3 found: NO 00:02:01.397 Program flake8 found: NO 00:02:01.397 Program misspell-fixer found: NO 00:02:01.397 Program restructuredtext-lint found: NO 00:02:01.397 Program valgrind found: YES (/usr/bin/valgrind) 00:02:01.397 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:01.397 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:01.397 Compiler for C supports arguments -Wwrite-strings: YES 00:02:01.397 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:01.397 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:01.397 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:01.397 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
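SPDK's make target hands off to Meson for the bundled libvfio-user, which is where the warnings above and the configure summary below come from. A minimal sketch of driving that sub-build standalone, assuming a libvfio-user checkout in the current directory; the buildtype, default_library and libdir values mirror the "User defined options" Meson reports:

    # Sketch: configure and build libvfio-user on its own with the same options.
    meson setup build-debug . --buildtype=debug \
        -Ddefault_library=shared --libdir=/usr/local/lib
    ninja -C build-debug
    # Stage the result under a DESTDIR instead of installing system-wide,
    # as the DESTDIR=... meson install step in this log does.
    DESTDIR="$PWD/stage" meson install --quiet -C build-debug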
00:02:01.397 Build targets in project: 8 00:02:01.397 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:01.397 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:01.397 00:02:01.397 libvfio-user 0.0.1 00:02:01.398 00:02:01.398 User defined options 00:02:01.398 buildtype : debug 00:02:01.398 default_library: shared 00:02:01.398 libdir : /usr/local/lib 00:02:01.398 00:02:01.398 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:01.657 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:01.917 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:01.917 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:01.917 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:01.917 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:01.917 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:01.917 [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:01.917 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:01.917 [8/37] Compiling C object samples/null.p/null.c.o 00:02:01.917 [9/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:01.917 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:01.917 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:01.917 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:01.917 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:01.917 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:01.917 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:01.917 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:01.917 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:01.917 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:01.917 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:01.917 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:01.917 [21/37] Compiling C object samples/server.p/server.c.o 00:02:01.917 [22/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:01.917 [23/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:01.917 [24/37] Compiling C object samples/client.p/client.c.o 00:02:01.917 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:01.917 [26/37] Linking target samples/client 00:02:01.917 [27/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:01.917 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:01.917 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:01.917 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:01.917 [31/37] Linking target test/unit_tests 00:02:02.178 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:02.178 [33/37] Linking target samples/gpio-pci-idio-16 00:02:02.179 [34/37] Linking target samples/shadow_ioeventfd_server 00:02:02.179 [35/37] Linking target samples/lspci 00:02:02.179 [36/37] Linking target samples/null 00:02:02.179 [37/37] Linking target samples/server 00:02:02.179 INFO: autodetecting backend as ninja 00:02:02.179 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:02.179 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:02.753 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:02.753 ninja: no work to do. 00:02:08.048 The Meson build system 00:02:08.048 Version: 1.5.0 00:02:08.048 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:08.048 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:08.048 Build type: native build 00:02:08.048 Program cat found: YES (/usr/bin/cat) 00:02:08.048 Project name: DPDK 00:02:08.048 Project version: 24.03.0 00:02:08.048 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:08.048 C linker for the host machine: cc ld.bfd 2.40-14 00:02:08.048 Host machine cpu family: x86_64 00:02:08.048 Host machine cpu: x86_64 00:02:08.048 Message: ## Building in Developer Mode ## 00:02:08.048 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:08.048 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:08.048 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:08.048 Program python3 found: YES (/usr/bin/python3) 00:02:08.048 Program cat found: YES (/usr/bin/cat) 00:02:08.048 Compiler for C supports arguments -march=native: YES 00:02:08.048 Checking for size of "void *" : 8 00:02:08.048 Checking for size of "void *" : 8 (cached) 00:02:08.048 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:08.048 Library m found: YES 00:02:08.048 Library numa found: YES 00:02:08.048 Has header "numaif.h" : YES 00:02:08.048 Library fdt found: NO 00:02:08.048 Library execinfo found: NO 00:02:08.048 Has header "execinfo.h" : YES 00:02:08.048 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:08.048 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:08.048 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:08.048 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:08.048 Run-time dependency openssl found: YES 3.1.1 00:02:08.048 Run-time dependency libpcap found: YES 1.10.4 00:02:08.048 Has header "pcap.h" with dependency libpcap: YES 00:02:08.048 Compiler for C supports arguments -Wcast-qual: YES 00:02:08.048 Compiler for C supports arguments -Wdeprecated: YES 00:02:08.048 Compiler for C supports arguments -Wformat: YES 00:02:08.048 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:08.048 Compiler for C supports arguments -Wformat-security: NO 00:02:08.048 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:08.048 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:08.048 Compiler for C supports arguments -Wnested-externs: YES 00:02:08.048 Compiler for C supports arguments -Wold-style-definition: YES 00:02:08.048 Compiler for C supports arguments -Wpointer-arith: YES 00:02:08.048 Compiler for C supports arguments -Wsign-compare: YES 00:02:08.048 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:08.048 Compiler for C supports arguments -Wundef: YES 00:02:08.049 Compiler for C supports arguments -Wwrite-strings: YES 00:02:08.049 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:08.049 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:08.049 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:08.049 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:08.049 Program objdump found: YES (/usr/bin/objdump) 00:02:08.049 Compiler for C supports arguments -mavx512f: YES 00:02:08.049 Checking if "AVX512 checking" compiles: YES 00:02:08.049 Fetching value of define "__SSE4_2__" : 1 00:02:08.049 Fetching value of define "__AES__" : 1 00:02:08.049 Fetching value of define "__AVX__" : 1 00:02:08.049 Fetching value of define "__AVX2__" : 1 00:02:08.049 Fetching value of define "__AVX512BW__" : 1 00:02:08.049 Fetching value of define "__AVX512CD__" : 1 00:02:08.049 Fetching value of define "__AVX512DQ__" : 1 00:02:08.049 Fetching value of define "__AVX512F__" : 1 00:02:08.049 Fetching value of define "__AVX512VL__" : 1 00:02:08.049 Fetching value of define "__PCLMUL__" : 1 00:02:08.049 Fetching value of define "__RDRND__" : 1 00:02:08.049 Fetching value of define "__RDSEED__" : 1 00:02:08.049 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:08.049 Fetching value of define "__znver1__" : (undefined) 00:02:08.049 Fetching value of define "__znver2__" : (undefined) 00:02:08.049 Fetching value of define "__znver3__" : (undefined) 00:02:08.049 Fetching value of define "__znver4__" : (undefined) 00:02:08.049 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:08.049 Message: lib/log: Defining dependency "log" 00:02:08.049 Message: lib/kvargs: Defining dependency "kvargs" 00:02:08.049 Message: lib/telemetry: Defining dependency "telemetry" 00:02:08.049 Checking for function "getentropy" : NO 00:02:08.049 Message: lib/eal: Defining dependency "eal" 00:02:08.049 Message: lib/ring: Defining dependency "ring" 00:02:08.049 Message: lib/rcu: Defining dependency "rcu" 00:02:08.049 Message: lib/mempool: Defining dependency "mempool" 00:02:08.049 Message: lib/mbuf: Defining dependency "mbuf" 00:02:08.049 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:08.049 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:08.049 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:08.049 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:08.049 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:08.049 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:08.049 Compiler for C supports arguments -mpclmul: YES 00:02:08.049 Compiler for C supports arguments -maes: YES 00:02:08.049 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:08.049 Compiler for C supports arguments -mavx512bw: YES 00:02:08.049 Compiler for C supports arguments -mavx512dq: YES 00:02:08.049 Compiler for C supports arguments -mavx512vl: YES 00:02:08.049 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:08.049 Compiler for C supports arguments -mavx2: YES 00:02:08.049 Compiler for C supports arguments -mavx: YES 00:02:08.049 Message: lib/net: Defining dependency "net" 00:02:08.049 Message: lib/meter: Defining dependency "meter" 00:02:08.049 Message: lib/ethdev: Defining dependency "ethdev" 00:02:08.049 Message: lib/pci: Defining dependency "pci" 00:02:08.049 Message: lib/cmdline: Defining dependency "cmdline" 00:02:08.049 Message: lib/hash: Defining dependency "hash" 00:02:08.049 Message: lib/timer: Defining dependency "timer" 00:02:08.049 Message: lib/compressdev: Defining dependency "compressdev" 00:02:08.049 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:08.049 Message: lib/dmadev: Defining dependency "dmadev" 
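The long run of "Fetching value of define" checks above is Meson asking the compiler which ISA extensions -march=native enables on this build host. The same macro set can be inspected directly; a one-liner sketch, assuming gcc as in the toolchain above:

    # Sketch: dump the ISA feature macros the compiler predefines for this CPU.
    gcc -march=native -dM -E - </dev/null \
        | grep -E '__(AES|AVX|AVX2|AVX512(BW|CD|DQ|F|VL)|PCLMUL|RDRND|RDSEED|VPCLMULQDQ)__' \
        | sort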
00:02:08.049 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:08.049 Message: lib/power: Defining dependency "power" 00:02:08.049 Message: lib/reorder: Defining dependency "reorder" 00:02:08.049 Message: lib/security: Defining dependency "security" 00:02:08.049 Has header "linux/userfaultfd.h" : YES 00:02:08.049 Has header "linux/vduse.h" : YES 00:02:08.049 Message: lib/vhost: Defining dependency "vhost" 00:02:08.049 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:08.049 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:08.049 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:08.049 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:08.049 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:08.049 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:08.049 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:08.049 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:08.049 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:08.049 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:08.049 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:08.049 Configuring doxy-api-html.conf using configuration 00:02:08.049 Configuring doxy-api-man.conf using configuration 00:02:08.049 Program mandb found: YES (/usr/bin/mandb) 00:02:08.049 Program sphinx-build found: NO 00:02:08.049 Configuring rte_build_config.h using configuration 00:02:08.049 Message: 00:02:08.049 ================= 00:02:08.049 Applications Enabled 00:02:08.049 ================= 00:02:08.049 00:02:08.049 apps: 00:02:08.049 00:02:08.049 00:02:08.049 Message: 00:02:08.049 ================= 00:02:08.049 Libraries Enabled 00:02:08.049 ================= 00:02:08.049 00:02:08.049 libs: 00:02:08.049 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:08.049 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:08.049 cryptodev, dmadev, power, reorder, security, vhost, 00:02:08.049 00:02:08.049 Message: 00:02:08.049 =============== 00:02:08.049 Drivers Enabled 00:02:08.049 =============== 00:02:08.049 00:02:08.049 common: 00:02:08.049 00:02:08.049 bus: 00:02:08.049 pci, vdev, 00:02:08.049 mempool: 00:02:08.049 ring, 00:02:08.049 dma: 00:02:08.049 00:02:08.049 net: 00:02:08.049 00:02:08.049 crypto: 00:02:08.049 00:02:08.049 compress: 00:02:08.049 00:02:08.049 vdpa: 00:02:08.049 00:02:08.049 00:02:08.049 Message: 00:02:08.049 ================= 00:02:08.049 Content Skipped 00:02:08.049 ================= 00:02:08.049 00:02:08.049 apps: 00:02:08.049 dumpcap: explicitly disabled via build config 00:02:08.049 graph: explicitly disabled via build config 00:02:08.049 pdump: explicitly disabled via build config 00:02:08.049 proc-info: explicitly disabled via build config 00:02:08.049 test-acl: explicitly disabled via build config 00:02:08.049 test-bbdev: explicitly disabled via build config 00:02:08.049 test-cmdline: explicitly disabled via build config 00:02:08.049 test-compress-perf: explicitly disabled via build config 00:02:08.049 test-crypto-perf: explicitly disabled via build config 00:02:08.049 test-dma-perf: explicitly disabled via build config 00:02:08.049 test-eventdev: explicitly disabled via build config 00:02:08.049 test-fib: explicitly disabled via build config 00:02:08.049 test-flow-perf: explicitly disabled via build config 00:02:08.049 test-gpudev: explicitly disabled 
via build config 00:02:08.049 test-mldev: explicitly disabled via build config 00:02:08.049 test-pipeline: explicitly disabled via build config 00:02:08.049 test-pmd: explicitly disabled via build config 00:02:08.049 test-regex: explicitly disabled via build config 00:02:08.049 test-sad: explicitly disabled via build config 00:02:08.049 test-security-perf: explicitly disabled via build config 00:02:08.049 00:02:08.049 libs: 00:02:08.049 argparse: explicitly disabled via build config 00:02:08.049 metrics: explicitly disabled via build config 00:02:08.049 acl: explicitly disabled via build config 00:02:08.049 bbdev: explicitly disabled via build config 00:02:08.049 bitratestats: explicitly disabled via build config 00:02:08.049 bpf: explicitly disabled via build config 00:02:08.049 cfgfile: explicitly disabled via build config 00:02:08.049 distributor: explicitly disabled via build config 00:02:08.049 efd: explicitly disabled via build config 00:02:08.049 eventdev: explicitly disabled via build config 00:02:08.049 dispatcher: explicitly disabled via build config 00:02:08.049 gpudev: explicitly disabled via build config 00:02:08.049 gro: explicitly disabled via build config 00:02:08.049 gso: explicitly disabled via build config 00:02:08.049 ip_frag: explicitly disabled via build config 00:02:08.049 jobstats: explicitly disabled via build config 00:02:08.049 latencystats: explicitly disabled via build config 00:02:08.049 lpm: explicitly disabled via build config 00:02:08.049 member: explicitly disabled via build config 00:02:08.049 pcapng: explicitly disabled via build config 00:02:08.049 rawdev: explicitly disabled via build config 00:02:08.049 regexdev: explicitly disabled via build config 00:02:08.049 mldev: explicitly disabled via build config 00:02:08.049 rib: explicitly disabled via build config 00:02:08.049 sched: explicitly disabled via build config 00:02:08.049 stack: explicitly disabled via build config 00:02:08.049 ipsec: explicitly disabled via build config 00:02:08.049 pdcp: explicitly disabled via build config 00:02:08.049 fib: explicitly disabled via build config 00:02:08.049 port: explicitly disabled via build config 00:02:08.049 pdump: explicitly disabled via build config 00:02:08.049 table: explicitly disabled via build config 00:02:08.049 pipeline: explicitly disabled via build config 00:02:08.049 graph: explicitly disabled via build config 00:02:08.049 node: explicitly disabled via build config 00:02:08.049 00:02:08.049 drivers: 00:02:08.049 common/cpt: not in enabled drivers build config 00:02:08.049 common/dpaax: not in enabled drivers build config 00:02:08.049 common/iavf: not in enabled drivers build config 00:02:08.049 common/idpf: not in enabled drivers build config 00:02:08.049 common/ionic: not in enabled drivers build config 00:02:08.049 common/mvep: not in enabled drivers build config 00:02:08.049 common/octeontx: not in enabled drivers build config 00:02:08.049 bus/auxiliary: not in enabled drivers build config 00:02:08.049 bus/cdx: not in enabled drivers build config 00:02:08.049 bus/dpaa: not in enabled drivers build config 00:02:08.049 bus/fslmc: not in enabled drivers build config 00:02:08.049 bus/ifpga: not in enabled drivers build config 00:02:08.049 bus/platform: not in enabled drivers build config 00:02:08.049 bus/uacce: not in enabled drivers build config 00:02:08.049 bus/vmbus: not in enabled drivers build config 00:02:08.050 common/cnxk: not in enabled drivers build config 00:02:08.050 common/mlx5: not in enabled drivers build config 00:02:08.050 
common/nfp: not in enabled drivers build config 00:02:08.050 common/nitrox: not in enabled drivers build config 00:02:08.050 common/qat: not in enabled drivers build config 00:02:08.050 common/sfc_efx: not in enabled drivers build config 00:02:08.050 mempool/bucket: not in enabled drivers build config 00:02:08.050 mempool/cnxk: not in enabled drivers build config 00:02:08.050 mempool/dpaa: not in enabled drivers build config 00:02:08.050 mempool/dpaa2: not in enabled drivers build config 00:02:08.050 mempool/octeontx: not in enabled drivers build config 00:02:08.050 mempool/stack: not in enabled drivers build config 00:02:08.050 dma/cnxk: not in enabled drivers build config 00:02:08.050 dma/dpaa: not in enabled drivers build config 00:02:08.050 dma/dpaa2: not in enabled drivers build config 00:02:08.050 dma/hisilicon: not in enabled drivers build config 00:02:08.050 dma/idxd: not in enabled drivers build config 00:02:08.050 dma/ioat: not in enabled drivers build config 00:02:08.050 dma/skeleton: not in enabled drivers build config 00:02:08.050 net/af_packet: not in enabled drivers build config 00:02:08.050 net/af_xdp: not in enabled drivers build config 00:02:08.050 net/ark: not in enabled drivers build config 00:02:08.050 net/atlantic: not in enabled drivers build config 00:02:08.050 net/avp: not in enabled drivers build config 00:02:08.050 net/axgbe: not in enabled drivers build config 00:02:08.050 net/bnx2x: not in enabled drivers build config 00:02:08.050 net/bnxt: not in enabled drivers build config 00:02:08.050 net/bonding: not in enabled drivers build config 00:02:08.050 net/cnxk: not in enabled drivers build config 00:02:08.050 net/cpfl: not in enabled drivers build config 00:02:08.050 net/cxgbe: not in enabled drivers build config 00:02:08.050 net/dpaa: not in enabled drivers build config 00:02:08.050 net/dpaa2: not in enabled drivers build config 00:02:08.050 net/e1000: not in enabled drivers build config 00:02:08.050 net/ena: not in enabled drivers build config 00:02:08.050 net/enetc: not in enabled drivers build config 00:02:08.050 net/enetfec: not in enabled drivers build config 00:02:08.050 net/enic: not in enabled drivers build config 00:02:08.050 net/failsafe: not in enabled drivers build config 00:02:08.050 net/fm10k: not in enabled drivers build config 00:02:08.050 net/gve: not in enabled drivers build config 00:02:08.050 net/hinic: not in enabled drivers build config 00:02:08.050 net/hns3: not in enabled drivers build config 00:02:08.050 net/i40e: not in enabled drivers build config 00:02:08.050 net/iavf: not in enabled drivers build config 00:02:08.050 net/ice: not in enabled drivers build config 00:02:08.050 net/idpf: not in enabled drivers build config 00:02:08.050 net/igc: not in enabled drivers build config 00:02:08.050 net/ionic: not in enabled drivers build config 00:02:08.050 net/ipn3ke: not in enabled drivers build config 00:02:08.050 net/ixgbe: not in enabled drivers build config 00:02:08.050 net/mana: not in enabled drivers build config 00:02:08.050 net/memif: not in enabled drivers build config 00:02:08.050 net/mlx4: not in enabled drivers build config 00:02:08.050 net/mlx5: not in enabled drivers build config 00:02:08.050 net/mvneta: not in enabled drivers build config 00:02:08.050 net/mvpp2: not in enabled drivers build config 00:02:08.050 net/netvsc: not in enabled drivers build config 00:02:08.050 net/nfb: not in enabled drivers build config 00:02:08.050 net/nfp: not in enabled drivers build config 00:02:08.050 net/ngbe: not in enabled drivers build 
config 00:02:08.050 net/null: not in enabled drivers build config 00:02:08.050 net/octeontx: not in enabled drivers build config 00:02:08.050 net/octeon_ep: not in enabled drivers build config 00:02:08.050 net/pcap: not in enabled drivers build config 00:02:08.050 net/pfe: not in enabled drivers build config 00:02:08.050 net/qede: not in enabled drivers build config 00:02:08.050 net/ring: not in enabled drivers build config 00:02:08.050 net/sfc: not in enabled drivers build config 00:02:08.050 net/softnic: not in enabled drivers build config 00:02:08.050 net/tap: not in enabled drivers build config 00:02:08.050 net/thunderx: not in enabled drivers build config 00:02:08.050 net/txgbe: not in enabled drivers build config 00:02:08.050 net/vdev_netvsc: not in enabled drivers build config 00:02:08.050 net/vhost: not in enabled drivers build config 00:02:08.050 net/virtio: not in enabled drivers build config 00:02:08.050 net/vmxnet3: not in enabled drivers build config 00:02:08.050 raw/*: missing internal dependency, "rawdev" 00:02:08.050 crypto/armv8: not in enabled drivers build config 00:02:08.050 crypto/bcmfs: not in enabled drivers build config 00:02:08.050 crypto/caam_jr: not in enabled drivers build config 00:02:08.050 crypto/ccp: not in enabled drivers build config 00:02:08.050 crypto/cnxk: not in enabled drivers build config 00:02:08.050 crypto/dpaa_sec: not in enabled drivers build config 00:02:08.050 crypto/dpaa2_sec: not in enabled drivers build config 00:02:08.050 crypto/ipsec_mb: not in enabled drivers build config 00:02:08.050 crypto/mlx5: not in enabled drivers build config 00:02:08.050 crypto/mvsam: not in enabled drivers build config 00:02:08.050 crypto/nitrox: not in enabled drivers build config 00:02:08.050 crypto/null: not in enabled drivers build config 00:02:08.050 crypto/octeontx: not in enabled drivers build config 00:02:08.050 crypto/openssl: not in enabled drivers build config 00:02:08.050 crypto/scheduler: not in enabled drivers build config 00:02:08.050 crypto/uadk: not in enabled drivers build config 00:02:08.050 crypto/virtio: not in enabled drivers build config 00:02:08.050 compress/isal: not in enabled drivers build config 00:02:08.050 compress/mlx5: not in enabled drivers build config 00:02:08.050 compress/nitrox: not in enabled drivers build config 00:02:08.050 compress/octeontx: not in enabled drivers build config 00:02:08.050 compress/zlib: not in enabled drivers build config 00:02:08.050 regex/*: missing internal dependency, "regexdev" 00:02:08.050 ml/*: missing internal dependency, "mldev" 00:02:08.050 vdpa/ifc: not in enabled drivers build config 00:02:08.050 vdpa/mlx5: not in enabled drivers build config 00:02:08.050 vdpa/nfp: not in enabled drivers build config 00:02:08.050 vdpa/sfc: not in enabled drivers build config 00:02:08.050 event/*: missing internal dependency, "eventdev" 00:02:08.050 baseband/*: missing internal dependency, "bbdev" 00:02:08.050 gpu/*: missing internal dependency, "gpudev" 00:02:08.050 00:02:08.050 00:02:08.311 Build targets in project: 84 00:02:08.311 00:02:08.311 DPDK 24.03.0 00:02:08.311 00:02:08.311 User defined options 00:02:08.311 buildtype : debug 00:02:08.311 default_library : shared 00:02:08.311 libdir : lib 00:02:08.311 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:08.311 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:08.311 c_link_args : 00:02:08.311 cpu_instruction_set: native 00:02:08.311 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:08.311 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:08.311 enable_docs : false 00:02:08.311 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:08.311 enable_kmods : false 00:02:08.311 max_lcores : 128 00:02:08.311 tests : false 00:02:08.311 00:02:08.311 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:08.573 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:08.839 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:08.839 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:08.839 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:08.839 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:08.839 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:08.839 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:08.839 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:08.839 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:08.839 [9/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:08.839 [10/267] Linking static target lib/librte_kvargs.a 00:02:08.839 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:08.839 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:08.839 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:08.839 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:08.839 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:08.839 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:09.098 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:09.098 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:09.098 [19/267] Linking static target lib/librte_log.a 00:02:09.098 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:09.098 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:09.098 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:09.098 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:09.098 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:09.098 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:09.098 [26/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:09.098 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:09.098 [28/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:09.098 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:09.098 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:09.098 [31/267] Linking static target 
lib/librte_pci.a 00:02:09.098 [32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:09.098 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:09.098 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:09.098 [35/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:09.098 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:09.098 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:09.357 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:09.357 [39/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:09.357 [40/267] Linking static target lib/librte_meter.a 00:02:09.357 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:09.357 [42/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:09.357 [43/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.357 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:09.357 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:09.357 [46/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.357 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:09.357 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:09.357 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:09.357 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:09.357 [51/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:09.357 [52/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:09.357 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:09.357 [54/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:09.357 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:09.357 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:09.357 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:09.357 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:09.357 [59/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:09.357 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:09.357 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:09.357 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:09.357 [63/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:09.357 [64/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:09.357 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:09.357 [66/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:09.357 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:09.357 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:09.357 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:09.357 [70/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:09.357 [71/267] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:09.357 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:09.357 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:09.357 [74/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:09.357 [75/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:09.357 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:09.357 [77/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:09.357 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:09.357 [79/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:09.357 [80/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:09.357 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:09.357 [82/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:09.357 [83/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:09.357 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:09.357 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:09.357 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:09.357 [87/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:09.357 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:09.357 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:09.357 [90/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:09.357 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:09.357 [92/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:09.357 [93/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:09.357 [94/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:09.617 [95/267] Linking static target lib/librte_telemetry.a 00:02:09.617 [96/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:09.617 [97/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:09.617 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:09.617 [99/267] Linking static target lib/librte_ring.a 00:02:09.617 [100/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:09.617 [101/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:09.617 [102/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:09.617 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:09.617 [104/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:09.617 [105/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:09.617 [106/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:09.617 [107/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:09.617 [108/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:09.617 [109/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:09.617 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:09.617 [111/267] Compiling C object 
lib/librte_power.a.p/power_rte_power.c.o 00:02:09.617 [112/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:09.617 [113/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:09.617 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:09.617 [115/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:09.617 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:09.617 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:09.617 [118/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:09.617 [119/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:09.617 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:09.617 [121/267] Linking static target lib/librte_mempool.a 00:02:09.617 [122/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:09.617 [123/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:09.617 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:09.617 [125/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:09.617 [126/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:09.617 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:09.617 [128/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:09.617 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:09.617 [130/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:09.617 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:09.617 [132/267] Linking static target lib/librte_rcu.a 00:02:09.617 [133/267] Linking static target lib/librte_cmdline.a 00:02:09.617 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:09.617 [135/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.617 [136/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:09.617 [137/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:09.617 [138/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:09.618 [139/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:09.618 [140/267] Linking static target lib/librte_timer.a 00:02:09.618 [141/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:09.618 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:09.618 [143/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:09.618 [144/267] Linking static target lib/librte_net.a 00:02:09.618 [145/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:09.618 [146/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.618 [147/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:09.618 [148/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:09.618 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:09.618 [150/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:09.618 [151/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:09.618 
[152/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:09.618 [153/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:09.618 [154/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:09.618 [155/267] Linking target lib/librte_log.so.24.1 00:02:09.618 [156/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:09.618 [157/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:09.618 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:09.618 [159/267] Linking static target lib/librte_dmadev.a 00:02:09.618 [160/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:09.618 [161/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:09.618 [162/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:09.618 [163/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:09.618 [164/267] Linking static target lib/librte_power.a 00:02:09.618 [165/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:09.618 [166/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:09.618 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:09.618 [168/267] Linking static target lib/librte_eal.a 00:02:09.618 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:09.618 [170/267] Linking static target lib/librte_reorder.a 00:02:09.618 [171/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:09.618 [172/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:09.618 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:09.618 [174/267] Linking static target lib/librte_security.a 00:02:09.618 [175/267] Linking static target lib/librte_compressdev.a 00:02:09.618 [176/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:09.618 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:09.878 [178/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:09.878 [179/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:09.878 [180/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:09.878 [181/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:09.878 [182/267] Linking target lib/librte_kvargs.so.24.1 00:02:09.878 [183/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:09.878 [184/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:09.878 [185/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:09.878 [186/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:09.878 [187/267] Linking static target lib/librte_mbuf.a 00:02:09.878 [188/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:09.878 [189/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.878 [190/267] Linking static target drivers/librte_bus_vdev.a 00:02:09.878 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:09.878 [192/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:09.878 [193/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 
00:02:09.878 [194/267] Linking static target lib/librte_hash.a 00:02:09.878 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:09.878 [196/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:09.878 [197/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:09.878 [198/267] Linking static target lib/librte_cryptodev.a 00:02:09.878 [199/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.878 [200/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:09.878 [201/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.878 [202/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:09.878 [203/267] Linking static target drivers/librte_bus_pci.a 00:02:09.878 [204/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:10.138 [205/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:10.138 [206/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:10.138 [207/267] Linking static target drivers/librte_mempool_ring.a 00:02:10.138 [208/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.138 [209/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.138 [210/267] Linking target lib/librte_telemetry.so.24.1 00:02:10.138 [211/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.138 [212/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:10.138 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.397 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:10.397 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.397 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.397 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.397 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:10.658 [219/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.658 [220/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:10.658 [221/267] Linking static target lib/librte_ethdev.a 00:02:10.658 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.658 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.920 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.920 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.920 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.492 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:11.492 [228/267] Linking static target lib/librte_vhost.a 00:02:12.072 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:13.527 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.119 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.063 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.063 [233/267] Linking target lib/librte_eal.so.24.1 00:02:21.063 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:21.063 [235/267] Linking target lib/librte_ring.so.24.1 00:02:21.063 [236/267] Linking target lib/librte_timer.so.24.1 00:02:21.063 [237/267] Linking target lib/librte_meter.so.24.1 00:02:21.063 [238/267] Linking target lib/librte_pci.so.24.1 00:02:21.063 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:21.063 [240/267] Linking target lib/librte_dmadev.so.24.1 00:02:21.325 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:21.325 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:21.325 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:21.325 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:21.325 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:21.325 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:21.325 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:21.325 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:21.325 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:21.325 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:21.587 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:21.587 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:21.587 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:21.587 [254/267] Linking target lib/librte_compressdev.so.24.1 00:02:21.587 [255/267] Linking target lib/librte_net.so.24.1 00:02:21.587 [256/267] Linking target lib/librte_reorder.so.24.1 00:02:21.587 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:21.848 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:21.848 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:21.848 [260/267] Linking target lib/librte_cmdline.so.24.1 00:02:21.848 [261/267] Linking target lib/librte_hash.so.24.1 00:02:21.848 [262/267] Linking target lib/librte_security.so.24.1 00:02:21.848 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:21.848 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:22.109 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:22.109 [266/267] Linking target lib/librte_power.so.24.1 00:02:22.109 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:22.109 INFO: autodetecting backend as ninja 00:02:22.109 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:26.318 CC lib/ut/ut.o 00:02:26.318 CC lib/log/log.o 00:02:26.318 CC lib/ut_mock/mock.o 00:02:26.318 CC lib/log/log_deprecated.o 00:02:26.318 CC lib/log/log_flags.o 00:02:26.318 LIB libspdk_ut.a 00:02:26.318 LIB libspdk_ut_mock.a 00:02:26.318 LIB libspdk_log.a 00:02:26.318 SO 
libspdk_ut.so.2.0 00:02:26.318 SO libspdk_ut_mock.so.6.0 00:02:26.318 SO libspdk_log.so.7.1 00:02:26.318 SYMLINK libspdk_ut.so 00:02:26.318 SYMLINK libspdk_ut_mock.so 00:02:26.318 SYMLINK libspdk_log.so 00:02:26.579 CC lib/util/base64.o 00:02:26.579 CC lib/util/bit_array.o 00:02:26.579 CC lib/util/cpuset.o 00:02:26.579 CC lib/util/crc16.o 00:02:26.579 CC lib/util/crc32.o 00:02:26.579 CXX lib/trace_parser/trace.o 00:02:26.579 CC lib/util/crc32c.o 00:02:26.579 CC lib/dma/dma.o 00:02:26.579 CC lib/ioat/ioat.o 00:02:26.579 CC lib/util/crc32_ieee.o 00:02:26.579 CC lib/util/crc64.o 00:02:26.579 CC lib/util/dif.o 00:02:26.579 CC lib/util/fd.o 00:02:26.579 CC lib/util/fd_group.o 00:02:26.579 CC lib/util/file.o 00:02:26.579 CC lib/util/hexlify.o 00:02:26.579 CC lib/util/iov.o 00:02:26.579 CC lib/util/math.o 00:02:26.579 CC lib/util/net.o 00:02:26.579 CC lib/util/pipe.o 00:02:26.579 CC lib/util/strerror_tls.o 00:02:26.579 CC lib/util/string.o 00:02:26.579 CC lib/util/uuid.o 00:02:26.579 CC lib/util/xor.o 00:02:26.579 CC lib/util/zipf.o 00:02:26.579 CC lib/util/md5.o 00:02:26.579 CC lib/vfio_user/host/vfio_user_pci.o 00:02:26.579 CC lib/vfio_user/host/vfio_user.o 00:02:26.840 LIB libspdk_ioat.a 00:02:26.840 SO libspdk_ioat.so.7.0 00:02:26.840 LIB libspdk_dma.a 00:02:26.840 SO libspdk_dma.so.5.0 00:02:26.840 SYMLINK libspdk_ioat.so 00:02:26.840 SYMLINK libspdk_dma.so 00:02:26.840 LIB libspdk_vfio_user.a 00:02:27.101 SO libspdk_vfio_user.so.5.0 00:02:27.101 LIB libspdk_util.a 00:02:27.101 SYMLINK libspdk_vfio_user.so 00:02:27.101 SO libspdk_util.so.10.0 00:02:27.101 SYMLINK libspdk_util.so 00:02:27.362 LIB libspdk_trace_parser.a 00:02:27.362 SO libspdk_trace_parser.so.6.0 00:02:27.362 SYMLINK libspdk_trace_parser.so 00:02:27.623 CC lib/json/json_parse.o 00:02:27.623 CC lib/json/json_write.o 00:02:27.623 CC lib/json/json_util.o 00:02:27.623 CC lib/idxd/idxd.o 00:02:27.623 CC lib/idxd/idxd_user.o 00:02:27.623 CC lib/idxd/idxd_kernel.o 00:02:27.623 CC lib/rdma_utils/rdma_utils.o 00:02:27.623 CC lib/conf/conf.o 00:02:27.623 CC lib/vmd/vmd.o 00:02:27.623 CC lib/env_dpdk/env.o 00:02:27.623 CC lib/vmd/led.o 00:02:27.623 CC lib/env_dpdk/memory.o 00:02:27.623 CC lib/env_dpdk/pci.o 00:02:27.623 CC lib/env_dpdk/init.o 00:02:27.623 CC lib/env_dpdk/threads.o 00:02:27.623 CC lib/env_dpdk/pci_ioat.o 00:02:27.623 CC lib/env_dpdk/pci_virtio.o 00:02:27.623 CC lib/env_dpdk/pci_vmd.o 00:02:27.623 CC lib/env_dpdk/pci_idxd.o 00:02:27.623 CC lib/env_dpdk/pci_event.o 00:02:27.623 CC lib/env_dpdk/sigbus_handler.o 00:02:27.623 CC lib/env_dpdk/pci_dpdk.o 00:02:27.623 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:27.623 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:27.885 LIB libspdk_conf.a 00:02:27.885 LIB libspdk_json.a 00:02:27.885 SO libspdk_conf.so.6.0 00:02:27.885 LIB libspdk_rdma_utils.a 00:02:27.885 SO libspdk_json.so.6.0 00:02:27.885 SO libspdk_rdma_utils.so.1.0 00:02:27.885 SYMLINK libspdk_conf.so 00:02:27.885 SYMLINK libspdk_json.so 00:02:27.885 SYMLINK libspdk_rdma_utils.so 00:02:28.147 LIB libspdk_idxd.a 00:02:28.147 SO libspdk_idxd.so.12.1 00:02:28.147 LIB libspdk_vmd.a 00:02:28.147 SO libspdk_vmd.so.6.0 00:02:28.147 SYMLINK libspdk_idxd.so 00:02:28.147 SYMLINK libspdk_vmd.so 00:02:28.408 CC lib/jsonrpc/jsonrpc_server.o 00:02:28.408 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:28.408 CC lib/jsonrpc/jsonrpc_client.o 00:02:28.408 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:28.408 CC lib/rdma_provider/common.o 00:02:28.408 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:28.668 LIB libspdk_jsonrpc.a 00:02:28.668 LIB libspdk_rdma_provider.a 
00:02:28.668 SO libspdk_rdma_provider.so.7.0 00:02:28.668 SO libspdk_jsonrpc.so.6.0 00:02:28.668 SYMLINK libspdk_rdma_provider.so 00:02:28.668 SYMLINK libspdk_jsonrpc.so 00:02:28.929 LIB libspdk_env_dpdk.a 00:02:28.929 SO libspdk_env_dpdk.so.15.0 00:02:28.929 CC lib/rpc/rpc.o 00:02:28.929 SYMLINK libspdk_env_dpdk.so 00:02:29.190 LIB libspdk_rpc.a 00:02:29.190 SO libspdk_rpc.so.6.0 00:02:29.451 SYMLINK libspdk_rpc.so 00:02:29.711 CC lib/trace/trace_flags.o 00:02:29.711 CC lib/trace/trace_rpc.o 00:02:29.711 CC lib/trace/trace.o 00:02:29.711 CC lib/notify/notify.o 00:02:29.711 CC lib/notify/notify_rpc.o 00:02:29.711 CC lib/keyring/keyring.o 00:02:29.711 CC lib/keyring/keyring_rpc.o 00:02:29.972 LIB libspdk_notify.a 00:02:29.972 SO libspdk_notify.so.6.0 00:02:29.972 LIB libspdk_trace.a 00:02:29.972 LIB libspdk_keyring.a 00:02:29.972 SO libspdk_trace.so.11.0 00:02:29.972 SO libspdk_keyring.so.2.0 00:02:29.972 SYMLINK libspdk_notify.so 00:02:29.972 SYMLINK libspdk_keyring.so 00:02:29.972 SYMLINK libspdk_trace.so 00:02:30.545 CC lib/thread/thread.o 00:02:30.545 CC lib/thread/iobuf.o 00:02:30.545 CC lib/sock/sock.o 00:02:30.545 CC lib/sock/sock_rpc.o 00:02:30.806 LIB libspdk_sock.a 00:02:30.806 SO libspdk_sock.so.10.0 00:02:30.806 SYMLINK libspdk_sock.so 00:02:31.067 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:31.067 CC lib/nvme/nvme_ctrlr.o 00:02:31.067 CC lib/nvme/nvme_fabric.o 00:02:31.067 CC lib/nvme/nvme_ns_cmd.o 00:02:31.067 CC lib/nvme/nvme_pcie.o 00:02:31.067 CC lib/nvme/nvme_ns.o 00:02:31.067 CC lib/nvme/nvme_pcie_common.o 00:02:31.067 CC lib/nvme/nvme_qpair.o 00:02:31.067 CC lib/nvme/nvme.o 00:02:31.067 CC lib/nvme/nvme_quirks.o 00:02:31.067 CC lib/nvme/nvme_transport.o 00:02:31.067 CC lib/nvme/nvme_discovery.o 00:02:31.067 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:31.067 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:31.067 CC lib/nvme/nvme_tcp.o 00:02:31.067 CC lib/nvme/nvme_opal.o 00:02:31.067 CC lib/nvme/nvme_io_msg.o 00:02:31.067 CC lib/nvme/nvme_poll_group.o 00:02:31.067 CC lib/nvme/nvme_zns.o 00:02:31.067 CC lib/nvme/nvme_stubs.o 00:02:31.067 CC lib/nvme/nvme_auth.o 00:02:31.067 CC lib/nvme/nvme_cuse.o 00:02:31.067 CC lib/nvme/nvme_vfio_user.o 00:02:31.067 CC lib/nvme/nvme_rdma.o 00:02:31.642 LIB libspdk_thread.a 00:02:31.642 SO libspdk_thread.so.10.2 00:02:31.642 SYMLINK libspdk_thread.so 00:02:31.904 CC lib/blob/blobstore.o 00:02:31.904 CC lib/blob/zeroes.o 00:02:31.904 CC lib/blob/request.o 00:02:31.904 CC lib/blob/blob_bs_dev.o 00:02:31.904 CC lib/accel/accel.o 00:02:31.904 CC lib/accel/accel_rpc.o 00:02:31.904 CC lib/accel/accel_sw.o 00:02:31.904 CC lib/virtio/virtio.o 00:02:31.904 CC lib/fsdev/fsdev.o 00:02:31.904 CC lib/virtio/virtio_vhost_user.o 00:02:31.904 CC lib/virtio/virtio_vfio_user.o 00:02:31.904 CC lib/virtio/virtio_pci.o 00:02:31.904 CC lib/init/subsystem.o 00:02:31.904 CC lib/init/json_config.o 00:02:31.904 CC lib/fsdev/fsdev_io.o 00:02:31.904 CC lib/vfu_tgt/tgt_endpoint.o 00:02:31.904 CC lib/fsdev/fsdev_rpc.o 00:02:31.904 CC lib/vfu_tgt/tgt_rpc.o 00:02:31.904 CC lib/init/subsystem_rpc.o 00:02:31.904 CC lib/init/rpc.o 00:02:32.476 LIB libspdk_init.a 00:02:32.476 SO libspdk_init.so.6.0 00:02:32.476 LIB libspdk_vfu_tgt.a 00:02:32.476 LIB libspdk_virtio.a 00:02:32.476 SO libspdk_virtio.so.7.0 00:02:32.476 SO libspdk_vfu_tgt.so.3.0 00:02:32.476 SYMLINK libspdk_init.so 00:02:32.476 SYMLINK libspdk_virtio.so 00:02:32.476 SYMLINK libspdk_vfu_tgt.so 00:02:32.738 LIB libspdk_fsdev.a 00:02:32.738 SO libspdk_fsdev.so.1.0 00:02:32.738 CC lib/event/app.o 00:02:32.738 CC lib/event/reactor.o 
00:02:32.738 CC lib/event/app_rpc.o 00:02:32.738 CC lib/event/log_rpc.o 00:02:32.738 CC lib/event/scheduler_static.o 00:02:32.738 SYMLINK libspdk_fsdev.so 00:02:33.000 LIB libspdk_accel.a 00:02:33.000 SO libspdk_accel.so.16.0 00:02:33.000 LIB libspdk_nvme.a 00:02:33.262 SYMLINK libspdk_accel.so 00:02:33.262 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:33.262 SO libspdk_nvme.so.14.0 00:02:33.262 LIB libspdk_event.a 00:02:33.262 SO libspdk_event.so.14.0 00:02:33.262 SYMLINK libspdk_event.so 00:02:33.524 SYMLINK libspdk_nvme.so 00:02:33.524 CC lib/bdev/bdev.o 00:02:33.524 CC lib/bdev/bdev_rpc.o 00:02:33.524 CC lib/bdev/bdev_zone.o 00:02:33.524 CC lib/bdev/part.o 00:02:33.524 CC lib/bdev/scsi_nvme.o 00:02:33.797 LIB libspdk_fuse_dispatcher.a 00:02:33.797 SO libspdk_fuse_dispatcher.so.1.0 00:02:33.797 SYMLINK libspdk_fuse_dispatcher.so 00:02:34.740 LIB libspdk_blob.a 00:02:34.740 SO libspdk_blob.so.11.0 00:02:34.740 SYMLINK libspdk_blob.so 00:02:35.001 CC lib/lvol/lvol.o 00:02:35.001 CC lib/blobfs/blobfs.o 00:02:35.001 CC lib/blobfs/tree.o 00:02:35.945 LIB libspdk_bdev.a 00:02:35.945 SO libspdk_bdev.so.17.0 00:02:35.945 LIB libspdk_blobfs.a 00:02:35.945 SO libspdk_blobfs.so.10.0 00:02:35.945 LIB libspdk_lvol.a 00:02:35.945 SYMLINK libspdk_bdev.so 00:02:35.945 SYMLINK libspdk_blobfs.so 00:02:35.945 SO libspdk_lvol.so.10.0 00:02:35.945 SYMLINK libspdk_lvol.so 00:02:36.207 CC lib/ftl/ftl_core.o 00:02:36.207 CC lib/ftl/ftl_init.o 00:02:36.207 CC lib/ftl/ftl_layout.o 00:02:36.207 CC lib/ftl/ftl_debug.o 00:02:36.207 CC lib/ftl/ftl_l2p.o 00:02:36.207 CC lib/ftl/ftl_io.o 00:02:36.207 CC lib/ftl/ftl_sb.o 00:02:36.207 CC lib/ftl/ftl_l2p_flat.o 00:02:36.207 CC lib/ftl/ftl_nv_cache.o 00:02:36.207 CC lib/ftl/ftl_band.o 00:02:36.207 CC lib/ftl/ftl_writer.o 00:02:36.207 CC lib/ftl/ftl_band_ops.o 00:02:36.207 CC lib/ftl/ftl_rq.o 00:02:36.207 CC lib/ftl/ftl_reloc.o 00:02:36.207 CC lib/ftl/ftl_l2p_cache.o 00:02:36.207 CC lib/nvmf/ctrlr.o 00:02:36.207 CC lib/nvmf/ctrlr_discovery.o 00:02:36.207 CC lib/ftl/ftl_p2l.o 00:02:36.207 CC lib/ftl/mngt/ftl_mngt.o 00:02:36.207 CC lib/ftl/ftl_p2l_log.o 00:02:36.207 CC lib/nbd/nbd.o 00:02:36.207 CC lib/nvmf/ctrlr_bdev.o 00:02:36.207 CC lib/nbd/nbd_rpc.o 00:02:36.207 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:36.207 CC lib/nvmf/subsystem.o 00:02:36.207 CC lib/ublk/ublk.o 00:02:36.207 CC lib/nvmf/nvmf.o 00:02:36.207 CC lib/nvmf/nvmf_rpc.o 00:02:36.207 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:36.207 CC lib/scsi/dev.o 00:02:36.207 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:36.207 CC lib/ublk/ublk_rpc.o 00:02:36.207 CC lib/scsi/lun.o 00:02:36.207 CC lib/nvmf/transport.o 00:02:36.207 CC lib/scsi/port.o 00:02:36.207 CC lib/nvmf/tcp.o 00:02:36.207 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:36.207 CC lib/scsi/scsi.o 00:02:36.207 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:36.207 CC lib/nvmf/stubs.o 00:02:36.207 CC lib/scsi/scsi_bdev.o 00:02:36.207 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:36.207 CC lib/nvmf/mdns_server.o 00:02:36.207 CC lib/nvmf/rdma.o 00:02:36.207 CC lib/scsi/scsi_pr.o 00:02:36.207 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:36.207 CC lib/nvmf/vfio_user.o 00:02:36.207 CC lib/scsi/scsi_rpc.o 00:02:36.207 CC lib/scsi/task.o 00:02:36.207 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:36.207 CC lib/nvmf/auth.o 00:02:36.207 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:36.207 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:36.207 CC lib/ftl/utils/ftl_conf.o 00:02:36.207 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:36.207 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:36.207 CC lib/ftl/utils/ftl_mempool.o 00:02:36.207 CC 
lib/ftl/utils/ftl_md.o 00:02:36.207 CC lib/ftl/utils/ftl_bitmap.o 00:02:36.207 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:36.207 CC lib/ftl/utils/ftl_property.o 00:02:36.207 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:36.207 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:36.207 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:36.207 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:36.207 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:36.207 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:36.207 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:36.207 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:36.466 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:36.466 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:36.466 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:36.466 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:36.466 CC lib/ftl/base/ftl_base_dev.o 00:02:36.466 CC lib/ftl/ftl_trace.o 00:02:36.466 CC lib/ftl/base/ftl_base_bdev.o 00:02:36.725 LIB libspdk_nbd.a 00:02:36.725 SO libspdk_nbd.so.7.0 00:02:36.987 LIB libspdk_scsi.a 00:02:36.987 SYMLINK libspdk_nbd.so 00:02:36.987 SO libspdk_scsi.so.9.0 00:02:36.987 LIB libspdk_ublk.a 00:02:36.987 SO libspdk_ublk.so.3.0 00:02:36.987 SYMLINK libspdk_scsi.so 00:02:36.987 SYMLINK libspdk_ublk.so 00:02:37.248 LIB libspdk_ftl.a 00:02:37.248 CC lib/iscsi/conn.o 00:02:37.248 CC lib/iscsi/init_grp.o 00:02:37.248 CC lib/iscsi/iscsi.o 00:02:37.248 CC lib/vhost/vhost.o 00:02:37.248 CC lib/iscsi/param.o 00:02:37.248 CC lib/iscsi/tgt_node.o 00:02:37.248 CC lib/vhost/vhost_rpc.o 00:02:37.248 CC lib/iscsi/portal_grp.o 00:02:37.248 CC lib/vhost/vhost_scsi.o 00:02:37.248 CC lib/vhost/rte_vhost_user.o 00:02:37.248 CC lib/vhost/vhost_blk.o 00:02:37.248 CC lib/iscsi/iscsi_subsystem.o 00:02:37.249 CC lib/iscsi/iscsi_rpc.o 00:02:37.249 CC lib/iscsi/task.o 00:02:37.510 SO libspdk_ftl.so.9.0 00:02:37.770 SYMLINK libspdk_ftl.so 00:02:38.032 LIB libspdk_nvmf.a 00:02:38.293 SO libspdk_nvmf.so.19.1 00:02:38.293 LIB libspdk_vhost.a 00:02:38.293 SO libspdk_vhost.so.8.0 00:02:38.293 SYMLINK libspdk_nvmf.so 00:02:38.553 SYMLINK libspdk_vhost.so 00:02:38.553 LIB libspdk_iscsi.a 00:02:38.553 SO libspdk_iscsi.so.8.0 00:02:38.814 SYMLINK libspdk_iscsi.so 00:02:39.386 CC module/env_dpdk/env_dpdk_rpc.o 00:02:39.386 CC module/vfu_device/vfu_virtio.o 00:02:39.386 CC module/vfu_device/vfu_virtio_blk.o 00:02:39.386 CC module/vfu_device/vfu_virtio_scsi.o 00:02:39.386 CC module/vfu_device/vfu_virtio_rpc.o 00:02:39.386 CC module/vfu_device/vfu_virtio_fs.o 00:02:39.386 CC module/keyring/linux/keyring.o 00:02:39.386 CC module/keyring/linux/keyring_rpc.o 00:02:39.386 CC module/keyring/file/keyring.o 00:02:39.386 CC module/keyring/file/keyring_rpc.o 00:02:39.386 CC module/fsdev/aio/fsdev_aio.o 00:02:39.386 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:39.386 CC module/fsdev/aio/linux_aio_mgr.o 00:02:39.386 LIB libspdk_env_dpdk_rpc.a 00:02:39.386 CC module/accel/error/accel_error.o 00:02:39.386 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:39.386 CC module/accel/error/accel_error_rpc.o 00:02:39.386 CC module/accel/ioat/accel_ioat.o 00:02:39.386 CC module/accel/ioat/accel_ioat_rpc.o 00:02:39.386 CC module/blob/bdev/blob_bdev.o 00:02:39.386 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:39.386 CC module/sock/posix/posix.o 00:02:39.386 CC module/accel/dsa/accel_dsa.o 00:02:39.386 CC module/accel/dsa/accel_dsa_rpc.o 00:02:39.386 CC module/scheduler/gscheduler/gscheduler.o 00:02:39.386 CC module/accel/iaa/accel_iaa_rpc.o 00:02:39.386 CC module/accel/iaa/accel_iaa.o 00:02:39.386 SO libspdk_env_dpdk_rpc.so.6.0 00:02:39.647 SYMLINK libspdk_env_dpdk_rpc.so 
00:02:39.647 LIB libspdk_keyring_file.a 00:02:39.647 LIB libspdk_keyring_linux.a 00:02:39.647 LIB libspdk_scheduler_dpdk_governor.a 00:02:39.648 SO libspdk_keyring_file.so.2.0 00:02:39.648 LIB libspdk_accel_error.a 00:02:39.648 SO libspdk_keyring_linux.so.1.0 00:02:39.648 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:39.648 LIB libspdk_scheduler_gscheduler.a 00:02:39.648 LIB libspdk_accel_ioat.a 00:02:39.648 SO libspdk_scheduler_gscheduler.so.4.0 00:02:39.648 LIB libspdk_scheduler_dynamic.a 00:02:39.648 SO libspdk_accel_error.so.2.0 00:02:39.648 SO libspdk_accel_ioat.so.6.0 00:02:39.648 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:39.648 LIB libspdk_accel_iaa.a 00:02:39.648 SYMLINK libspdk_keyring_file.so 00:02:39.648 SO libspdk_scheduler_dynamic.so.4.0 00:02:39.648 SYMLINK libspdk_keyring_linux.so 00:02:39.648 SYMLINK libspdk_scheduler_gscheduler.so 00:02:39.909 LIB libspdk_blob_bdev.a 00:02:39.910 SO libspdk_accel_iaa.so.3.0 00:02:39.910 LIB libspdk_accel_dsa.a 00:02:39.910 SYMLINK libspdk_accel_error.so 00:02:39.910 SO libspdk_blob_bdev.so.11.0 00:02:39.910 SYMLINK libspdk_accel_ioat.so 00:02:39.910 SYMLINK libspdk_scheduler_dynamic.so 00:02:39.910 SO libspdk_accel_dsa.so.5.0 00:02:39.910 SYMLINK libspdk_accel_iaa.so 00:02:39.910 SYMLINK libspdk_blob_bdev.so 00:02:39.910 LIB libspdk_vfu_device.a 00:02:39.910 SYMLINK libspdk_accel_dsa.so 00:02:39.910 SO libspdk_vfu_device.so.3.0 00:02:40.171 SYMLINK libspdk_vfu_device.so 00:02:40.171 LIB libspdk_fsdev_aio.a 00:02:40.171 SO libspdk_fsdev_aio.so.1.0 00:02:40.171 LIB libspdk_sock_posix.a 00:02:40.171 SYMLINK libspdk_fsdev_aio.so 00:02:40.171 SO libspdk_sock_posix.so.6.0 00:02:40.432 SYMLINK libspdk_sock_posix.so 00:02:40.432 CC module/bdev/lvol/vbdev_lvol.o 00:02:40.432 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:40.432 CC module/blobfs/bdev/blobfs_bdev.o 00:02:40.432 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:40.432 CC module/bdev/malloc/bdev_malloc.o 00:02:40.432 CC module/bdev/null/bdev_null.o 00:02:40.432 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:40.432 CC module/bdev/null/bdev_null_rpc.o 00:02:40.432 CC module/bdev/ftl/bdev_ftl.o 00:02:40.432 CC module/bdev/raid/bdev_raid.o 00:02:40.432 CC module/bdev/raid/bdev_raid_rpc.o 00:02:40.432 CC module/bdev/raid/raid0.o 00:02:40.433 CC module/bdev/raid/bdev_raid_sb.o 00:02:40.433 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:40.433 CC module/bdev/aio/bdev_aio.o 00:02:40.433 CC module/bdev/passthru/vbdev_passthru.o 00:02:40.433 CC module/bdev/aio/bdev_aio_rpc.o 00:02:40.433 CC module/bdev/gpt/gpt.o 00:02:40.433 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:40.433 CC module/bdev/raid/raid1.o 00:02:40.433 CC module/bdev/gpt/vbdev_gpt.o 00:02:40.433 CC module/bdev/delay/vbdev_delay.o 00:02:40.433 CC module/bdev/raid/concat.o 00:02:40.433 CC module/bdev/nvme/bdev_nvme.o 00:02:40.433 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:40.433 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:40.433 CC module/bdev/split/vbdev_split.o 00:02:40.433 CC module/bdev/error/vbdev_error.o 00:02:40.433 CC module/bdev/error/vbdev_error_rpc.o 00:02:40.433 CC module/bdev/split/vbdev_split_rpc.o 00:02:40.433 CC module/bdev/nvme/nvme_rpc.o 00:02:40.433 CC module/bdev/nvme/bdev_mdns_client.o 00:02:40.433 CC module/bdev/nvme/vbdev_opal.o 00:02:40.433 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:40.433 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:40.433 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:40.433 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:40.433 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:40.433 CC 
module/bdev/virtio/bdev_virtio_blk.o 00:02:40.433 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:40.433 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:40.433 CC module/bdev/iscsi/bdev_iscsi.o 00:02:40.694 LIB libspdk_blobfs_bdev.a 00:02:40.694 SO libspdk_blobfs_bdev.so.6.0 00:02:40.694 LIB libspdk_bdev_null.a 00:02:40.694 LIB libspdk_bdev_error.a 00:02:40.694 LIB libspdk_bdev_split.a 00:02:40.694 SYMLINK libspdk_blobfs_bdev.so 00:02:40.955 SO libspdk_bdev_error.so.6.0 00:02:40.955 LIB libspdk_bdev_ftl.a 00:02:40.955 LIB libspdk_bdev_gpt.a 00:02:40.955 SO libspdk_bdev_null.so.6.0 00:02:40.955 SO libspdk_bdev_split.so.6.0 00:02:40.955 LIB libspdk_bdev_passthru.a 00:02:40.955 LIB libspdk_bdev_aio.a 00:02:40.955 SO libspdk_bdev_ftl.so.6.0 00:02:40.955 SO libspdk_bdev_gpt.so.6.0 00:02:40.955 SO libspdk_bdev_passthru.so.6.0 00:02:40.955 SO libspdk_bdev_aio.so.6.0 00:02:40.955 LIB libspdk_bdev_zone_block.a 00:02:40.955 SYMLINK libspdk_bdev_error.so 00:02:40.955 LIB libspdk_bdev_malloc.a 00:02:40.955 SYMLINK libspdk_bdev_null.so 00:02:40.955 SYMLINK libspdk_bdev_split.so 00:02:40.955 LIB libspdk_bdev_iscsi.a 00:02:40.955 LIB libspdk_bdev_delay.a 00:02:40.955 SO libspdk_bdev_zone_block.so.6.0 00:02:40.955 SYMLINK libspdk_bdev_ftl.so 00:02:40.955 SO libspdk_bdev_malloc.so.6.0 00:02:40.955 SYMLINK libspdk_bdev_gpt.so 00:02:40.955 SO libspdk_bdev_iscsi.so.6.0 00:02:40.955 SYMLINK libspdk_bdev_passthru.so 00:02:40.955 SYMLINK libspdk_bdev_aio.so 00:02:40.955 SO libspdk_bdev_delay.so.6.0 00:02:40.955 SYMLINK libspdk_bdev_zone_block.so 00:02:40.955 SYMLINK libspdk_bdev_malloc.so 00:02:40.955 LIB libspdk_bdev_lvol.a 00:02:40.955 SYMLINK libspdk_bdev_iscsi.so 00:02:40.955 SYMLINK libspdk_bdev_delay.so 00:02:40.955 LIB libspdk_bdev_virtio.a 00:02:40.955 SO libspdk_bdev_lvol.so.6.0 00:02:41.217 SO libspdk_bdev_virtio.so.6.0 00:02:41.217 SYMLINK libspdk_bdev_lvol.so 00:02:41.217 SYMLINK libspdk_bdev_virtio.so 00:02:41.479 LIB libspdk_bdev_raid.a 00:02:41.479 SO libspdk_bdev_raid.so.6.0 00:02:41.479 SYMLINK libspdk_bdev_raid.so 00:02:42.422 LIB libspdk_bdev_nvme.a 00:02:42.683 SO libspdk_bdev_nvme.so.7.0 00:02:42.683 SYMLINK libspdk_bdev_nvme.so 00:02:43.256 CC module/event/subsystems/iobuf/iobuf.o 00:02:43.256 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:43.256 CC module/event/subsystems/vmd/vmd.o 00:02:43.256 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:43.256 CC module/event/subsystems/scheduler/scheduler.o 00:02:43.256 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:43.256 CC module/event/subsystems/sock/sock.o 00:02:43.517 CC module/event/subsystems/fsdev/fsdev.o 00:02:43.517 CC module/event/subsystems/keyring/keyring.o 00:02:43.517 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:43.517 LIB libspdk_event_sock.a 00:02:43.517 LIB libspdk_event_fsdev.a 00:02:43.517 LIB libspdk_event_scheduler.a 00:02:43.517 LIB libspdk_event_vhost_blk.a 00:02:43.517 LIB libspdk_event_vmd.a 00:02:43.517 LIB libspdk_event_iobuf.a 00:02:43.517 LIB libspdk_event_keyring.a 00:02:43.517 LIB libspdk_event_vfu_tgt.a 00:02:43.517 SO libspdk_event_sock.so.5.0 00:02:43.517 SO libspdk_event_fsdev.so.1.0 00:02:43.517 SO libspdk_event_vhost_blk.so.3.0 00:02:43.517 SO libspdk_event_scheduler.so.4.0 00:02:43.517 SO libspdk_event_vmd.so.6.0 00:02:43.517 SO libspdk_event_keyring.so.1.0 00:02:43.517 SO libspdk_event_iobuf.so.3.0 00:02:43.517 SO libspdk_event_vfu_tgt.so.3.0 00:02:43.778 SYMLINK libspdk_event_scheduler.so 00:02:43.778 SYMLINK libspdk_event_fsdev.so 00:02:43.778 SYMLINK libspdk_event_vhost_blk.so 
00:02:43.778 SYMLINK libspdk_event_sock.so 00:02:43.778 SYMLINK libspdk_event_keyring.so 00:02:43.778 SYMLINK libspdk_event_vmd.so 00:02:43.778 SYMLINK libspdk_event_vfu_tgt.so 00:02:43.778 SYMLINK libspdk_event_iobuf.so 00:02:44.038 CC module/event/subsystems/accel/accel.o 00:02:44.299 LIB libspdk_event_accel.a 00:02:44.299 SO libspdk_event_accel.so.6.0 00:02:44.299 SYMLINK libspdk_event_accel.so 00:02:44.560 CC module/event/subsystems/bdev/bdev.o 00:02:44.821 LIB libspdk_event_bdev.a 00:02:44.821 SO libspdk_event_bdev.so.6.0 00:02:44.821 SYMLINK libspdk_event_bdev.so 00:02:45.394 CC module/event/subsystems/ublk/ublk.o 00:02:45.394 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:45.394 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:45.394 CC module/event/subsystems/scsi/scsi.o 00:02:45.394 CC module/event/subsystems/nbd/nbd.o 00:02:45.394 LIB libspdk_event_ublk.a 00:02:45.394 LIB libspdk_event_scsi.a 00:02:45.394 LIB libspdk_event_nbd.a 00:02:45.394 SO libspdk_event_scsi.so.6.0 00:02:45.394 SO libspdk_event_ublk.so.3.0 00:02:45.394 SO libspdk_event_nbd.so.6.0 00:02:45.655 SYMLINK libspdk_event_scsi.so 00:02:45.655 LIB libspdk_event_nvmf.a 00:02:45.655 SYMLINK libspdk_event_ublk.so 00:02:45.655 SYMLINK libspdk_event_nbd.so 00:02:45.655 SO libspdk_event_nvmf.so.6.0 00:02:45.655 SYMLINK libspdk_event_nvmf.so 00:02:45.915 CC module/event/subsystems/iscsi/iscsi.o 00:02:45.915 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:45.915 LIB libspdk_event_vhost_scsi.a 00:02:45.915 LIB libspdk_event_iscsi.a 00:02:46.175 SO libspdk_event_vhost_scsi.so.3.0 00:02:46.175 SO libspdk_event_iscsi.so.6.0 00:02:46.175 SYMLINK libspdk_event_vhost_scsi.so 00:02:46.175 SYMLINK libspdk_event_iscsi.so 00:02:46.436 SO libspdk.so.6.0 00:02:46.436 SYMLINK libspdk.so 00:02:46.697 CXX app/trace/trace.o 00:02:46.697 CC app/trace_record/trace_record.o 00:02:46.697 CC app/spdk_nvme_identify/identify.o 00:02:46.697 CC app/spdk_top/spdk_top.o 00:02:46.697 CC app/spdk_lspci/spdk_lspci.o 00:02:46.697 TEST_HEADER include/spdk/accel.h 00:02:46.697 CC test/rpc_client/rpc_client_test.o 00:02:46.697 CC app/spdk_nvme_discover/discovery_aer.o 00:02:46.697 TEST_HEADER include/spdk/accel_module.h 00:02:46.697 TEST_HEADER include/spdk/assert.h 00:02:46.697 TEST_HEADER include/spdk/barrier.h 00:02:46.697 TEST_HEADER include/spdk/base64.h 00:02:46.697 TEST_HEADER include/spdk/bdev.h 00:02:46.697 TEST_HEADER include/spdk/bdev_module.h 00:02:46.697 TEST_HEADER include/spdk/bdev_zone.h 00:02:46.697 TEST_HEADER include/spdk/bit_array.h 00:02:46.697 CC app/spdk_nvme_perf/perf.o 00:02:46.697 TEST_HEADER include/spdk/bit_pool.h 00:02:46.697 TEST_HEADER include/spdk/blob_bdev.h 00:02:46.697 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:46.697 TEST_HEADER include/spdk/blobfs.h 00:02:46.697 TEST_HEADER include/spdk/blob.h 00:02:46.697 TEST_HEADER include/spdk/conf.h 00:02:46.697 TEST_HEADER include/spdk/config.h 00:02:46.697 TEST_HEADER include/spdk/cpuset.h 00:02:46.697 TEST_HEADER include/spdk/crc16.h 00:02:46.697 TEST_HEADER include/spdk/crc32.h 00:02:46.697 TEST_HEADER include/spdk/crc64.h 00:02:46.697 TEST_HEADER include/spdk/dif.h 00:02:46.697 TEST_HEADER include/spdk/dma.h 00:02:46.697 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:46.697 TEST_HEADER include/spdk/endian.h 00:02:46.697 TEST_HEADER include/spdk/env_dpdk.h 00:02:46.697 TEST_HEADER include/spdk/env.h 00:02:46.697 TEST_HEADER include/spdk/event.h 00:02:46.697 TEST_HEADER include/spdk/fd_group.h 00:02:46.697 CC app/iscsi_tgt/iscsi_tgt.o 00:02:46.697 TEST_HEADER 
include/spdk/fd.h 00:02:46.697 TEST_HEADER include/spdk/file.h 00:02:46.697 TEST_HEADER include/spdk/fsdev.h 00:02:46.697 TEST_HEADER include/spdk/fsdev_module.h 00:02:46.697 CC app/spdk_dd/spdk_dd.o 00:02:46.697 TEST_HEADER include/spdk/ftl.h 00:02:46.697 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:46.697 TEST_HEADER include/spdk/gpt_spec.h 00:02:46.697 TEST_HEADER include/spdk/hexlify.h 00:02:46.697 TEST_HEADER include/spdk/histogram_data.h 00:02:46.697 TEST_HEADER include/spdk/idxd.h 00:02:46.697 TEST_HEADER include/spdk/idxd_spec.h 00:02:46.697 TEST_HEADER include/spdk/ioat.h 00:02:46.697 TEST_HEADER include/spdk/init.h 00:02:46.697 TEST_HEADER include/spdk/ioat_spec.h 00:02:46.697 TEST_HEADER include/spdk/iscsi_spec.h 00:02:46.697 TEST_HEADER include/spdk/json.h 00:02:46.697 TEST_HEADER include/spdk/keyring.h 00:02:46.697 TEST_HEADER include/spdk/jsonrpc.h 00:02:46.697 TEST_HEADER include/spdk/keyring_module.h 00:02:46.697 TEST_HEADER include/spdk/likely.h 00:02:46.697 TEST_HEADER include/spdk/log.h 00:02:46.697 TEST_HEADER include/spdk/lvol.h 00:02:46.697 TEST_HEADER include/spdk/md5.h 00:02:46.697 TEST_HEADER include/spdk/memory.h 00:02:46.697 TEST_HEADER include/spdk/mmio.h 00:02:46.698 TEST_HEADER include/spdk/nbd.h 00:02:46.698 CC app/nvmf_tgt/nvmf_main.o 00:02:46.698 TEST_HEADER include/spdk/net.h 00:02:46.698 TEST_HEADER include/spdk/notify.h 00:02:46.698 TEST_HEADER include/spdk/nvme.h 00:02:46.698 TEST_HEADER include/spdk/nvme_intel.h 00:02:46.698 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:46.698 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:46.698 TEST_HEADER include/spdk/nvme_spec.h 00:02:46.698 TEST_HEADER include/spdk/nvme_zns.h 00:02:46.698 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:46.698 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:46.698 CC app/spdk_tgt/spdk_tgt.o 00:02:46.698 TEST_HEADER include/spdk/nvmf.h 00:02:46.698 TEST_HEADER include/spdk/nvmf_spec.h 00:02:46.698 TEST_HEADER include/spdk/nvmf_transport.h 00:02:46.698 TEST_HEADER include/spdk/opal.h 00:02:46.698 TEST_HEADER include/spdk/pci_ids.h 00:02:46.698 TEST_HEADER include/spdk/opal_spec.h 00:02:46.698 TEST_HEADER include/spdk/pipe.h 00:02:46.698 TEST_HEADER include/spdk/queue.h 00:02:46.698 TEST_HEADER include/spdk/rpc.h 00:02:46.698 TEST_HEADER include/spdk/reduce.h 00:02:46.698 TEST_HEADER include/spdk/scheduler.h 00:02:46.698 TEST_HEADER include/spdk/scsi.h 00:02:46.698 TEST_HEADER include/spdk/sock.h 00:02:46.698 TEST_HEADER include/spdk/scsi_spec.h 00:02:46.698 TEST_HEADER include/spdk/stdinc.h 00:02:46.698 TEST_HEADER include/spdk/string.h 00:02:46.698 TEST_HEADER include/spdk/trace.h 00:02:46.698 TEST_HEADER include/spdk/thread.h 00:02:46.698 TEST_HEADER include/spdk/trace_parser.h 00:02:46.960 TEST_HEADER include/spdk/tree.h 00:02:46.960 TEST_HEADER include/spdk/ublk.h 00:02:46.960 TEST_HEADER include/spdk/util.h 00:02:46.960 TEST_HEADER include/spdk/uuid.h 00:02:46.960 TEST_HEADER include/spdk/version.h 00:02:46.960 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:46.960 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:46.960 TEST_HEADER include/spdk/vhost.h 00:02:46.960 TEST_HEADER include/spdk/vmd.h 00:02:46.960 TEST_HEADER include/spdk/xor.h 00:02:46.960 TEST_HEADER include/spdk/zipf.h 00:02:46.960 CXX test/cpp_headers/accel.o 00:02:46.960 CXX test/cpp_headers/accel_module.o 00:02:46.960 CXX test/cpp_headers/assert.o 00:02:46.960 CXX test/cpp_headers/barrier.o 00:02:46.960 CXX test/cpp_headers/base64.o 00:02:46.960 CXX test/cpp_headers/bdev.o 00:02:46.960 CXX 
test/cpp_headers/bdev_module.o 00:02:46.960 CXX test/cpp_headers/bdev_zone.o 00:02:46.960 CXX test/cpp_headers/bit_pool.o 00:02:46.960 CXX test/cpp_headers/bit_array.o 00:02:46.960 CXX test/cpp_headers/blob_bdev.o 00:02:46.960 CXX test/cpp_headers/blobfs.o 00:02:46.960 CXX test/cpp_headers/blobfs_bdev.o 00:02:46.960 CXX test/cpp_headers/blob.o 00:02:46.960 CXX test/cpp_headers/conf.o 00:02:46.960 CXX test/cpp_headers/config.o 00:02:46.960 CXX test/cpp_headers/crc16.o 00:02:46.960 CXX test/cpp_headers/cpuset.o 00:02:46.960 CXX test/cpp_headers/crc32.o 00:02:46.960 CXX test/cpp_headers/crc64.o 00:02:46.960 CXX test/cpp_headers/dif.o 00:02:46.960 CXX test/cpp_headers/endian.o 00:02:46.960 CXX test/cpp_headers/dma.o 00:02:46.960 CXX test/cpp_headers/env_dpdk.o 00:02:46.960 CXX test/cpp_headers/event.o 00:02:46.961 CXX test/cpp_headers/env.o 00:02:46.961 CXX test/cpp_headers/fd_group.o 00:02:46.961 CXX test/cpp_headers/fd.o 00:02:46.961 CXX test/cpp_headers/file.o 00:02:46.961 CXX test/cpp_headers/fsdev.o 00:02:46.961 CXX test/cpp_headers/ftl.o 00:02:46.961 CXX test/cpp_headers/fsdev_module.o 00:02:46.961 CXX test/cpp_headers/fuse_dispatcher.o 00:02:46.961 CXX test/cpp_headers/gpt_spec.o 00:02:46.961 CXX test/cpp_headers/hexlify.o 00:02:46.961 CXX test/cpp_headers/idxd_spec.o 00:02:46.961 CXX test/cpp_headers/histogram_data.o 00:02:46.961 CXX test/cpp_headers/ioat.o 00:02:46.961 CXX test/cpp_headers/ioat_spec.o 00:02:46.961 CXX test/cpp_headers/iscsi_spec.o 00:02:46.961 CXX test/cpp_headers/idxd.o 00:02:46.961 CXX test/cpp_headers/init.o 00:02:46.961 CXX test/cpp_headers/json.o 00:02:46.961 CXX test/cpp_headers/jsonrpc.o 00:02:46.961 CXX test/cpp_headers/likely.o 00:02:46.961 CXX test/cpp_headers/keyring.o 00:02:46.961 CXX test/cpp_headers/keyring_module.o 00:02:46.961 CXX test/cpp_headers/lvol.o 00:02:46.961 CXX test/cpp_headers/md5.o 00:02:46.961 CXX test/cpp_headers/log.o 00:02:46.961 CXX test/cpp_headers/memory.o 00:02:46.961 CXX test/cpp_headers/mmio.o 00:02:46.961 CXX test/cpp_headers/net.o 00:02:46.961 CXX test/cpp_headers/nbd.o 00:02:46.961 CXX test/cpp_headers/notify.o 00:02:46.961 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:46.961 CXX test/cpp_headers/nvme.o 00:02:46.961 CXX test/cpp_headers/nvme_intel.o 00:02:46.961 CXX test/cpp_headers/nvme_ocssd.o 00:02:46.961 CXX test/cpp_headers/nvme_spec.o 00:02:46.961 CC test/env/vtophys/vtophys.o 00:02:46.961 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:46.961 CXX test/cpp_headers/nvmf_cmd.o 00:02:46.961 CXX test/cpp_headers/nvme_zns.o 00:02:46.961 CXX test/cpp_headers/opal.o 00:02:46.961 CXX test/cpp_headers/nvmf.o 00:02:46.961 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:46.961 LINK spdk_lspci 00:02:46.961 CXX test/cpp_headers/nvmf_spec.o 00:02:46.961 CXX test/cpp_headers/pci_ids.o 00:02:46.961 CXX test/cpp_headers/opal_spec.o 00:02:46.961 CXX test/cpp_headers/nvmf_transport.o 00:02:46.961 CC examples/ioat/perf/perf.o 00:02:46.961 CC examples/ioat/verify/verify.o 00:02:46.961 CXX test/cpp_headers/pipe.o 00:02:46.961 CXX test/cpp_headers/reduce.o 00:02:46.961 CXX test/cpp_headers/queue.o 00:02:46.961 CC examples/util/zipf/zipf.o 00:02:46.961 CXX test/cpp_headers/rpc.o 00:02:46.961 CC test/env/memory/memory_ut.o 00:02:46.961 CXX test/cpp_headers/scheduler.o 00:02:46.961 CC test/thread/poller_perf/poller_perf.o 00:02:46.961 CXX test/cpp_headers/stdinc.o 00:02:46.961 CXX test/cpp_headers/scsi.o 00:02:46.961 CXX test/cpp_headers/scsi_spec.o 00:02:46.961 CXX test/cpp_headers/string.o 00:02:46.961 CXX test/cpp_headers/sock.o 00:02:46.961 
CXX test/cpp_headers/thread.o 00:02:46.961 CXX test/cpp_headers/trace.o 00:02:46.961 CXX test/cpp_headers/trace_parser.o 00:02:46.961 CXX test/cpp_headers/tree.o 00:02:46.961 CC test/app/jsoncat/jsoncat.o 00:02:46.961 CXX test/cpp_headers/ublk.o 00:02:46.961 CXX test/cpp_headers/util.o 00:02:46.961 CXX test/cpp_headers/uuid.o 00:02:46.961 CC test/env/pci/pci_ut.o 00:02:46.961 CXX test/cpp_headers/vfio_user_pci.o 00:02:46.961 CXX test/cpp_headers/version.o 00:02:46.961 CXX test/cpp_headers/vfio_user_spec.o 00:02:46.961 CXX test/cpp_headers/vmd.o 00:02:46.961 CXX test/cpp_headers/vhost.o 00:02:46.961 CXX test/cpp_headers/zipf.o 00:02:46.961 CXX test/cpp_headers/xor.o 00:02:46.961 CC test/app/histogram_perf/histogram_perf.o 00:02:46.961 CC app/fio/nvme/fio_plugin.o 00:02:46.961 CC test/app/stub/stub.o 00:02:46.961 CC test/dma/test_dma/test_dma.o 00:02:46.961 CC test/app/bdev_svc/bdev_svc.o 00:02:46.961 LINK rpc_client_test 00:02:46.961 CC app/fio/bdev/fio_plugin.o 00:02:46.961 LINK interrupt_tgt 00:02:47.225 LINK spdk_nvme_discover 00:02:47.225 LINK iscsi_tgt 00:02:47.225 LINK nvmf_tgt 00:02:47.225 LINK spdk_trace_record 00:02:47.225 CC test/env/mem_callbacks/mem_callbacks.o 00:02:47.225 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:47.225 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:47.225 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:47.225 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:47.487 LINK spdk_tgt 00:02:47.487 LINK poller_perf 00:02:47.487 LINK spdk_trace 00:02:47.487 LINK spdk_dd 00:02:47.487 LINK jsoncat 00:02:47.487 LINK env_dpdk_post_init 00:02:47.487 LINK zipf 00:02:47.487 LINK bdev_svc 00:02:47.747 LINK histogram_perf 00:02:47.747 LINK ioat_perf 00:02:47.747 LINK verify 00:02:47.747 LINK vtophys 00:02:47.747 LINK stub 00:02:47.747 CC app/vhost/vhost.o 00:02:47.747 LINK spdk_top 00:02:47.747 CC test/event/reactor/reactor.o 00:02:47.747 CC test/event/reactor_perf/reactor_perf.o 00:02:47.747 CC test/event/event_perf/event_perf.o 00:02:48.010 LINK spdk_nvme_perf 00:02:48.010 LINK nvme_fuzz 00:02:48.010 CC test/event/app_repeat/app_repeat.o 00:02:48.010 LINK test_dma 00:02:48.010 CC test/event/scheduler/scheduler.o 00:02:48.010 LINK pci_ut 00:02:48.010 LINK vhost_fuzz 00:02:48.010 CC examples/vmd/lsvmd/lsvmd.o 00:02:48.010 CC examples/vmd/led/led.o 00:02:48.010 LINK spdk_nvme 00:02:48.010 LINK vhost 00:02:48.010 CC examples/idxd/perf/perf.o 00:02:48.010 CC examples/sock/hello_world/hello_sock.o 00:02:48.011 LINK reactor 00:02:48.011 LINK event_perf 00:02:48.011 LINK reactor_perf 00:02:48.011 CC examples/thread/thread/thread_ex.o 00:02:48.011 LINK spdk_bdev 00:02:48.011 LINK app_repeat 00:02:48.011 LINK spdk_nvme_identify 00:02:48.272 LINK mem_callbacks 00:02:48.272 LINK scheduler 00:02:48.272 LINK led 00:02:48.272 LINK lsvmd 00:02:48.272 LINK hello_sock 00:02:48.272 LINK idxd_perf 00:02:48.272 LINK thread 00:02:48.534 CC test/nvme/simple_copy/simple_copy.o 00:02:48.534 CC test/nvme/overhead/overhead.o 00:02:48.534 CC test/nvme/err_injection/err_injection.o 00:02:48.534 CC test/nvme/aer/aer.o 00:02:48.534 CC test/nvme/reserve/reserve.o 00:02:48.534 CC test/nvme/reset/reset.o 00:02:48.534 CC test/nvme/cuse/cuse.o 00:02:48.534 CC test/nvme/fdp/fdp.o 00:02:48.534 CC test/nvme/e2edp/nvme_dp.o 00:02:48.534 CC test/nvme/sgl/sgl.o 00:02:48.534 CC test/nvme/fused_ordering/fused_ordering.o 00:02:48.534 CC test/nvme/connect_stress/connect_stress.o 00:02:48.534 CC test/nvme/startup/startup.o 00:02:48.534 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:48.534 CC 
test/nvme/compliance/nvme_compliance.o 00:02:48.534 CC test/nvme/boot_partition/boot_partition.o 00:02:48.534 CC test/blobfs/mkfs/mkfs.o 00:02:48.534 CC test/accel/dif/dif.o 00:02:48.534 LINK memory_ut 00:02:48.534 CC test/lvol/esnap/esnap.o 00:02:48.794 LINK err_injection 00:02:48.794 LINK startup 00:02:48.794 LINK boot_partition 00:02:48.794 LINK connect_stress 00:02:48.794 LINK fused_ordering 00:02:48.794 LINK simple_copy 00:02:48.794 LINK reserve 00:02:48.794 LINK doorbell_aers 00:02:48.794 LINK mkfs 00:02:48.794 LINK aer 00:02:48.794 LINK sgl 00:02:48.794 LINK overhead 00:02:48.794 LINK reset 00:02:48.794 LINK nvme_dp 00:02:48.794 CC examples/nvme/reconnect/reconnect.o 00:02:48.794 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:48.794 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:48.794 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:48.794 CC examples/nvme/hotplug/hotplug.o 00:02:48.794 CC examples/nvme/abort/abort.o 00:02:48.794 CC examples/nvme/hello_world/hello_world.o 00:02:48.794 CC examples/nvme/arbitration/arbitration.o 00:02:48.794 LINK fdp 00:02:48.794 LINK nvme_compliance 00:02:49.056 LINK iscsi_fuzz 00:02:49.056 CC examples/accel/perf/accel_perf.o 00:02:49.056 CC examples/blob/cli/blobcli.o 00:02:49.056 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:49.056 CC examples/blob/hello_world/hello_blob.o 00:02:49.056 LINK pmr_persistence 00:02:49.056 LINK cmb_copy 00:02:49.056 LINK hello_world 00:02:49.056 LINK hotplug 00:02:49.056 LINK dif 00:02:49.056 LINK arbitration 00:02:49.056 LINK reconnect 00:02:49.318 LINK abort 00:02:49.318 LINK hello_blob 00:02:49.318 LINK nvme_manage 00:02:49.318 LINK hello_fsdev 00:02:49.318 LINK accel_perf 00:02:49.580 LINK blobcli 00:02:49.580 LINK cuse 00:02:49.842 CC test/bdev/bdevio/bdevio.o 00:02:50.104 CC examples/bdev/hello_world/hello_bdev.o 00:02:50.104 CC examples/bdev/bdevperf/bdevperf.o 00:02:50.104 LINK bdevio 00:02:50.365 LINK hello_bdev 00:02:50.937 LINK bdevperf 00:02:51.511 CC examples/nvmf/nvmf/nvmf.o 00:02:51.772 LINK nvmf 00:02:53.158 LINK esnap 00:02:53.419 00:02:53.419 real 0m53.719s 00:02:53.419 user 7m48.223s 00:02:53.419 sys 4m21.776s 00:02:53.419 12:07:27 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:53.419 12:07:27 make -- common/autotest_common.sh@10 -- $ set +x 00:02:53.419 ************************************ 00:02:53.419 END TEST make 00:02:53.419 ************************************ 00:02:53.419 12:07:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:53.419 12:07:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:53.419 12:07:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:53.419 12:07:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.419 12:07:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:53.419 12:07:27 -- pm/common@44 -- $ pid=1316818 00:02:53.419 12:07:27 -- pm/common@50 -- $ kill -TERM 1316818 00:02:53.419 12:07:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.419 12:07:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:53.419 12:07:27 -- pm/common@44 -- $ pid=1316819 00:02:53.419 12:07:27 -- pm/common@50 -- $ kill -TERM 1316819 00:02:53.419 12:07:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.419 12:07:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 
00:02:53.419 12:07:27 -- pm/common@44 -- $ pid=1316821 00:02:53.419 12:07:27 -- pm/common@50 -- $ kill -TERM 1316821 00:02:53.419 12:07:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.419 12:07:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:53.419 12:07:27 -- pm/common@44 -- $ pid=1316848 00:02:53.419 12:07:27 -- pm/common@50 -- $ sudo -E kill -TERM 1316848 00:02:53.681 12:07:28 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:53.681 12:07:28 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:53.681 12:07:28 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:53.681 12:07:28 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:53.681 12:07:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:53.681 12:07:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:53.681 12:07:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:53.681 12:07:28 -- scripts/common.sh@336 -- # IFS=.-: 00:02:53.681 12:07:28 -- scripts/common.sh@336 -- # read -ra ver1 00:02:53.681 12:07:28 -- scripts/common.sh@337 -- # IFS=.-: 00:02:53.681 12:07:28 -- scripts/common.sh@337 -- # read -ra ver2 00:02:53.681 12:07:28 -- scripts/common.sh@338 -- # local 'op=<' 00:02:53.681 12:07:28 -- scripts/common.sh@340 -- # ver1_l=2 00:02:53.681 12:07:28 -- scripts/common.sh@341 -- # ver2_l=1 00:02:53.681 12:07:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:53.681 12:07:28 -- scripts/common.sh@344 -- # case "$op" in 00:02:53.681 12:07:28 -- scripts/common.sh@345 -- # : 1 00:02:53.681 12:07:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:53.681 12:07:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:53.681 12:07:28 -- scripts/common.sh@365 -- # decimal 1 00:02:53.681 12:07:28 -- scripts/common.sh@353 -- # local d=1 00:02:53.681 12:07:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:53.681 12:07:28 -- scripts/common.sh@355 -- # echo 1 00:02:53.681 12:07:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:53.681 12:07:28 -- scripts/common.sh@366 -- # decimal 2 00:02:53.681 12:07:28 -- scripts/common.sh@353 -- # local d=2 00:02:53.681 12:07:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:53.681 12:07:28 -- scripts/common.sh@355 -- # echo 2 00:02:53.681 12:07:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:53.681 12:07:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:53.681 12:07:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:53.681 12:07:28 -- scripts/common.sh@368 -- # return 0 00:02:53.681 12:07:28 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:53.681 12:07:28 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:53.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.681 --rc genhtml_branch_coverage=1 00:02:53.681 --rc genhtml_function_coverage=1 00:02:53.681 --rc genhtml_legend=1 00:02:53.681 --rc geninfo_all_blocks=1 00:02:53.681 --rc geninfo_unexecuted_blocks=1 00:02:53.681 00:02:53.681 ' 00:02:53.681 12:07:28 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:53.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.681 --rc genhtml_branch_coverage=1 00:02:53.681 --rc genhtml_function_coverage=1 00:02:53.681 --rc genhtml_legend=1 00:02:53.681 --rc geninfo_all_blocks=1 00:02:53.681 --rc geninfo_unexecuted_blocks=1 00:02:53.681 00:02:53.681 ' 00:02:53.681 12:07:28 -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:53.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.681 --rc genhtml_branch_coverage=1 00:02:53.681 --rc genhtml_function_coverage=1 00:02:53.681 --rc genhtml_legend=1 00:02:53.681 --rc geninfo_all_blocks=1 00:02:53.681 --rc geninfo_unexecuted_blocks=1 00:02:53.681 00:02:53.681 ' 00:02:53.681 12:07:28 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:53.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.681 --rc genhtml_branch_coverage=1 00:02:53.681 --rc genhtml_function_coverage=1 00:02:53.681 --rc genhtml_legend=1 00:02:53.681 --rc geninfo_all_blocks=1 00:02:53.681 --rc geninfo_unexecuted_blocks=1 00:02:53.681 00:02:53.681 ' 00:02:53.681 12:07:28 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:53.681 12:07:28 -- nvmf/common.sh@7 -- # uname -s 00:02:53.681 12:07:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:53.681 12:07:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:53.681 12:07:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:53.681 12:07:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:53.681 12:07:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:53.681 12:07:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:53.681 12:07:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:53.681 12:07:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:53.681 12:07:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:53.681 12:07:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:53.681 12:07:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:53.681 12:07:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:53.681 12:07:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:53.681 12:07:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:53.681 12:07:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:53.681 12:07:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:53.681 12:07:28 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:53.681 12:07:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:53.681 12:07:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:53.681 12:07:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:53.681 12:07:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:53.681 12:07:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.681 12:07:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.681 12:07:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.681 12:07:28 -- paths/export.sh@5 -- # export PATH 00:02:53.681 12:07:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.681 12:07:28 -- nvmf/common.sh@51 -- # : 0 00:02:53.681 12:07:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:53.681 12:07:28 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:53.681 12:07:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:53.681 12:07:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:53.681 12:07:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:53.681 12:07:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:53.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:53.681 12:07:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:53.681 12:07:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:53.682 12:07:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:53.682 12:07:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:53.682 12:07:28 -- spdk/autotest.sh@32 -- # uname -s 00:02:53.682 12:07:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:53.682 12:07:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:53.682 12:07:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:53.682 12:07:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:53.682 12:07:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:53.682 12:07:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:53.682 12:07:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:53.682 12:07:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:53.682 12:07:28 -- spdk/autotest.sh@48 -- # udevadm_pid=1382308 00:02:53.682 12:07:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:53.682 12:07:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:53.682 12:07:28 -- pm/common@17 -- # local monitor 00:02:53.682 12:07:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.682 12:07:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.682 12:07:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.682 12:07:28 -- pm/common@21 -- # date +%s 00:02:53.682 12:07:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.682 12:07:28 -- pm/common@21 -- # date +%s 00:02:53.682 12:07:28 -- pm/common@25 -- # sleep 1 00:02:53.682 12:07:28 -- pm/common@21 -- # date +%s 00:02:53.682 12:07:28 -- pm/common@21 -- # date +%s 00:02:53.682 12:07:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730718448 00:02:53.682 12:07:28 -- pm/common@21 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730718448 00:02:53.682 12:07:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730718448 00:02:53.682 12:07:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730718448 00:02:53.682 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730718448_collect-cpu-load.pm.log 00:02:53.682 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730718448_collect-vmstat.pm.log 00:02:53.682 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730718448_collect-cpu-temp.pm.log 00:02:53.682 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730718448_collect-bmc-pm.bmc.pm.log 00:02:54.625 12:07:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:54.625 12:07:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:54.625 12:07:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:54.625 12:07:29 -- common/autotest_common.sh@10 -- # set +x 00:02:54.625 12:07:29 -- spdk/autotest.sh@59 -- # create_test_list 00:02:54.625 12:07:29 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:54.625 12:07:29 -- common/autotest_common.sh@10 -- # set +x 00:02:54.885 12:07:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:54.885 12:07:29 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:54.885 12:07:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:54.885 12:07:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:54.885 12:07:29 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:54.885 12:07:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:54.885 12:07:29 -- common/autotest_common.sh@1455 -- # uname 00:02:54.885 12:07:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:54.885 12:07:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:54.885 12:07:29 -- common/autotest_common.sh@1475 -- # uname 00:02:54.885 12:07:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:54.885 12:07:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:54.885 12:07:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:54.885 lcov: LCOV version 1.15 00:02:54.885 12:07:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:09.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:09.901 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:24.803 12:07:59 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:24.803 12:07:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:24.803 12:07:59 -- common/autotest_common.sh@10 -- # set +x 00:03:24.803 12:07:59 -- spdk/autotest.sh@78 -- # rm -f 00:03:24.803 12:07:59 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.101 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:28.101 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:28.101 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:28.362 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:28.362 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:28.362 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:28.362 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:28.362 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:28.362 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:28.362 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:28.362 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:28.362 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:28.362 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:28.362 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:28.362 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:28.623 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:28.623 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:28.883 12:08:03 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:28.883 12:08:03 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:28.883 12:08:03 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:28.883 12:08:03 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:28.883 12:08:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:28.883 12:08:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:28.883 12:08:03 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:28.883 12:08:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:28.883 12:08:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:28.883 12:08:03 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:28.883 12:08:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:28.883 12:08:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:28.883 12:08:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:28.883 12:08:03 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:28.884 12:08:03 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:28.884 No valid GPT data, bailing 00:03:28.884 12:08:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:28.884 12:08:03 -- scripts/common.sh@394 -- # pt= 00:03:28.884 12:08:03 -- scripts/common.sh@395 -- # return 1 00:03:28.884 12:08:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:28.884 1+0 records in 00:03:28.884 
1+0 records out 00:03:28.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0066401 s, 158 MB/s 00:03:28.884 12:08:03 -- spdk/autotest.sh@105 -- # sync 00:03:28.884 12:08:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:28.884 12:08:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:28.884 12:08:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:38.887 12:08:11 -- spdk/autotest.sh@111 -- # uname -s 00:03:38.887 12:08:11 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:38.887 12:08:11 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:38.887 12:08:11 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:40.799 Hugepages 00:03:40.799 node hugesize free / total 00:03:40.799 node0 1048576kB 0 / 0 00:03:40.799 node0 2048kB 0 / 0 00:03:40.799 node1 1048576kB 0 / 0 00:03:40.799 node1 2048kB 0 / 0 00:03:40.799 00:03:40.799 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:40.799 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:40.799 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:40.799 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:40.799 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:40.799 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:40.799 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:40.799 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:40.799 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:40.799 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:40.799 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:40.799 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:40.799 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:40.799 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:40.799 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:40.799 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:40.799 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:40.799 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:03:40.799 12:08:15 -- spdk/autotest.sh@117 -- # uname -s 00:03:40.799 12:08:15 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:40.799 12:08:15 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:40.799 12:08:15 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:44.097 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:44.097 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:44.097 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:44.097 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:44.097 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:44.097 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:44.097 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:44.097 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:44.097 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:44.358 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:44.358 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:44.358 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:44.358 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:44.358 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:44.358 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:44.358 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:46.269 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:46.269 12:08:20 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:47.652 12:08:21 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:47.652 12:08:21 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:47.652 12:08:21 -- 
common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:47.652 12:08:21 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:47.652 12:08:21 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:47.652 12:08:21 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:47.652 12:08:21 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:47.652 12:08:21 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:47.652 12:08:21 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:47.652 12:08:21 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:47.652 12:08:21 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:47.652 12:08:21 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.956 Waiting for block devices as requested 00:03:50.956 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:50.956 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:50.956 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:50.956 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:50.956 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:50.956 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:50.956 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:51.216 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:51.216 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:51.478 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:51.478 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:51.478 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:51.478 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:51.739 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:51.739 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:51.739 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:51.739 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:52.310 12:08:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:52.310 12:08:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:52.310 12:08:26 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:52.310 12:08:26 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:52.310 12:08:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:52.310 12:08:26 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:52.310 12:08:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:52.310 12:08:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:52.310 12:08:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:52.310 12:08:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:52.310 12:08:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:52.310 12:08:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:52.310 12:08:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:52.310 12:08:26 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:52.310 12:08:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:52.310 12:08:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:52.310 12:08:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:52.310 
12:08:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:52.310 12:08:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:52.310 12:08:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:52.310 12:08:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:52.310 12:08:26 -- common/autotest_common.sh@1541 -- # continue 00:03:52.310 12:08:26 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:52.310 12:08:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:52.310 12:08:26 -- common/autotest_common.sh@10 -- # set +x 00:03:52.310 12:08:26 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:52.310 12:08:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:52.310 12:08:26 -- common/autotest_common.sh@10 -- # set +x 00:03:52.310 12:08:26 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.616 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:55.616 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:55.616 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:55.616 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:55.616 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:55.616 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:55.616 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:55.616 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:55.616 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:55.616 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:55.616 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:55.616 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:55.616 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:55.616 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:55.616 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:55.616 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:55.616 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:56.187 12:08:30 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:56.187 12:08:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:56.187 12:08:30 -- common/autotest_common.sh@10 -- # set +x 00:03:56.187 12:08:30 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:56.187 12:08:30 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:56.187 12:08:30 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:56.187 12:08:30 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:56.187 12:08:30 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:56.187 12:08:30 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:56.187 12:08:30 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:56.187 12:08:30 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:56.187 12:08:30 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:56.187 12:08:30 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:56.187 12:08:30 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:56.187 12:08:30 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:56.187 12:08:30 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:56.187 12:08:30 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:56.187 12:08:30 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:56.187 12:08:30 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:56.187 12:08:30 -- common/autotest_common.sh@1564 -- # cat 
/sys/bus/pci/devices/0000:65:00.0/device 00:03:56.187 12:08:30 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:03:56.187 12:08:30 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:56.187 12:08:30 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:56.187 12:08:30 -- common/autotest_common.sh@1570 -- # return 0 00:03:56.187 12:08:30 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:56.187 12:08:30 -- common/autotest_common.sh@1578 -- # return 0 00:03:56.187 12:08:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:56.187 12:08:30 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:56.188 12:08:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:56.188 12:08:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:56.188 12:08:30 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:56.188 12:08:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:56.188 12:08:30 -- common/autotest_common.sh@10 -- # set +x 00:03:56.188 12:08:30 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:56.188 12:08:30 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:56.188 12:08:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:56.188 12:08:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:56.188 12:08:30 -- common/autotest_common.sh@10 -- # set +x 00:03:56.188 ************************************ 00:03:56.188 START TEST env 00:03:56.188 ************************************ 00:03:56.188 12:08:30 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:56.188 * Looking for test storage... 00:03:56.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:56.188 12:08:30 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:56.188 12:08:30 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:56.188 12:08:30 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:56.449 12:08:30 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:56.449 12:08:30 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.449 12:08:30 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.449 12:08:30 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.449 12:08:30 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.449 12:08:30 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.449 12:08:30 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.449 12:08:30 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.449 12:08:30 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.449 12:08:30 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.449 12:08:30 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.449 12:08:30 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.449 12:08:30 env -- scripts/common.sh@344 -- # case "$op" in 00:03:56.449 12:08:30 env -- scripts/common.sh@345 -- # : 1 00:03:56.449 12:08:30 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.449 12:08:30 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:56.449 12:08:30 env -- scripts/common.sh@365 -- # decimal 1 00:03:56.449 12:08:30 env -- scripts/common.sh@353 -- # local d=1 00:03:56.449 12:08:30 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.449 12:08:30 env -- scripts/common.sh@355 -- # echo 1 00:03:56.449 12:08:30 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.449 12:08:30 env -- scripts/common.sh@366 -- # decimal 2 00:03:56.449 12:08:30 env -- scripts/common.sh@353 -- # local d=2 00:03:56.449 12:08:30 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.449 12:08:30 env -- scripts/common.sh@355 -- # echo 2 00:03:56.449 12:08:30 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.449 12:08:30 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.449 12:08:30 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.449 12:08:30 env -- scripts/common.sh@368 -- # return 0 00:03:56.449 12:08:30 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.449 12:08:30 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:56.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.449 --rc genhtml_branch_coverage=1 00:03:56.449 --rc genhtml_function_coverage=1 00:03:56.449 --rc genhtml_legend=1 00:03:56.449 --rc geninfo_all_blocks=1 00:03:56.449 --rc geninfo_unexecuted_blocks=1 00:03:56.449 00:03:56.449 ' 00:03:56.449 12:08:30 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:56.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.449 --rc genhtml_branch_coverage=1 00:03:56.449 --rc genhtml_function_coverage=1 00:03:56.449 --rc genhtml_legend=1 00:03:56.449 --rc geninfo_all_blocks=1 00:03:56.449 --rc geninfo_unexecuted_blocks=1 00:03:56.449 00:03:56.449 ' 00:03:56.449 12:08:30 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:56.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.449 --rc genhtml_branch_coverage=1 00:03:56.449 --rc genhtml_function_coverage=1 00:03:56.449 --rc genhtml_legend=1 00:03:56.449 --rc geninfo_all_blocks=1 00:03:56.449 --rc geninfo_unexecuted_blocks=1 00:03:56.449 00:03:56.449 ' 00:03:56.449 12:08:30 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:56.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.449 --rc genhtml_branch_coverage=1 00:03:56.449 --rc genhtml_function_coverage=1 00:03:56.449 --rc genhtml_legend=1 00:03:56.449 --rc geninfo_all_blocks=1 00:03:56.449 --rc geninfo_unexecuted_blocks=1 00:03:56.449 00:03:56.449 ' 00:03:56.449 12:08:30 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:56.449 12:08:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:56.449 12:08:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:56.449 12:08:30 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.449 ************************************ 00:03:56.449 START TEST env_memory 00:03:56.449 ************************************ 00:03:56.449 12:08:30 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:56.449 00:03:56.449 00:03:56.449 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.449 http://cunit.sourceforge.net/ 00:03:56.449 00:03:56.449 00:03:56.449 Suite: memory 00:03:56.449 Test: alloc and free memory map ...[2024-11-04 12:08:30.926905] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:56.449 passed 00:03:56.449 Test: mem map translation ...[2024-11-04 12:08:30.952404] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:56.449 [2024-11-04 12:08:30.952428] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:56.449 [2024-11-04 12:08:30.952475] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:56.449 [2024-11-04 12:08:30.952482] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:56.449 passed 00:03:56.449 Test: mem map registration ...[2024-11-04 12:08:31.007857] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:56.450 [2024-11-04 12:08:31.007879] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:56.712 passed 00:03:56.712 Test: mem map adjacent registrations ...passed 00:03:56.712 00:03:56.712 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.712 suites 1 1 n/a 0 0 00:03:56.712 tests 4 4 4 0 0 00:03:56.712 asserts 152 152 152 0 n/a 00:03:56.712 00:03:56.712 Elapsed time = 0.202 seconds 00:03:56.712 00:03:56.712 real 0m0.218s 00:03:56.712 user 0m0.201s 00:03:56.712 sys 0m0.015s 00:03:56.712 12:08:31 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:56.712 12:08:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:56.712 ************************************ 00:03:56.712 END TEST env_memory 00:03:56.712 ************************************ 00:03:56.712 12:08:31 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:56.712 12:08:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:56.712 12:08:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:56.712 12:08:31 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.712 ************************************ 00:03:56.712 START TEST env_vtophys 00:03:56.712 ************************************ 00:03:56.712 12:08:31 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:56.712 EAL: lib.eal log level changed from notice to debug 00:03:56.712 EAL: Detected lcore 0 as core 0 on socket 0 00:03:56.712 EAL: Detected lcore 1 as core 1 on socket 0 00:03:56.712 EAL: Detected lcore 2 as core 2 on socket 0 00:03:56.712 EAL: Detected lcore 3 as core 3 on socket 0 00:03:56.712 EAL: Detected lcore 4 as core 4 on socket 0 00:03:56.712 EAL: Detected lcore 5 as core 5 on socket 0 00:03:56.712 EAL: Detected lcore 6 as core 6 on socket 0 00:03:56.712 EAL: Detected lcore 7 as core 7 on socket 0 00:03:56.712 EAL: Detected lcore 8 as core 8 on socket 0 00:03:56.712 EAL: Detected lcore 9 as core 9 on socket 0 00:03:56.712 EAL: Detected lcore 10 as 
core 10 on socket 0 00:03:56.712 EAL: Detected lcore 11 as core 11 on socket 0 00:03:56.712 EAL: Detected lcore 12 as core 12 on socket 0 00:03:56.712 EAL: Detected lcore 13 as core 13 on socket 0 00:03:56.712 EAL: Detected lcore 14 as core 14 on socket 0 00:03:56.712 EAL: Detected lcore 15 as core 15 on socket 0 00:03:56.712 EAL: Detected lcore 16 as core 16 on socket 0 00:03:56.712 EAL: Detected lcore 17 as core 17 on socket 0 00:03:56.712 EAL: Detected lcore 18 as core 18 on socket 0 00:03:56.712 EAL: Detected lcore 19 as core 19 on socket 0 00:03:56.712 EAL: Detected lcore 20 as core 20 on socket 0 00:03:56.712 EAL: Detected lcore 21 as core 21 on socket 0 00:03:56.712 EAL: Detected lcore 22 as core 22 on socket 0 00:03:56.712 EAL: Detected lcore 23 as core 23 on socket 0 00:03:56.712 EAL: Detected lcore 24 as core 24 on socket 0 00:03:56.712 EAL: Detected lcore 25 as core 25 on socket 0 00:03:56.712 EAL: Detected lcore 26 as core 26 on socket 0 00:03:56.712 EAL: Detected lcore 27 as core 27 on socket 0 00:03:56.712 EAL: Detected lcore 28 as core 28 on socket 0 00:03:56.712 EAL: Detected lcore 29 as core 29 on socket 0 00:03:56.712 EAL: Detected lcore 30 as core 30 on socket 0 00:03:56.712 EAL: Detected lcore 31 as core 31 on socket 0 00:03:56.712 EAL: Detected lcore 32 as core 32 on socket 0 00:03:56.712 EAL: Detected lcore 33 as core 33 on socket 0 00:03:56.712 EAL: Detected lcore 34 as core 34 on socket 0 00:03:56.712 EAL: Detected lcore 35 as core 35 on socket 0 00:03:56.712 EAL: Detected lcore 36 as core 0 on socket 1 00:03:56.712 EAL: Detected lcore 37 as core 1 on socket 1 00:03:56.712 EAL: Detected lcore 38 as core 2 on socket 1 00:03:56.712 EAL: Detected lcore 39 as core 3 on socket 1 00:03:56.712 EAL: Detected lcore 40 as core 4 on socket 1 00:03:56.712 EAL: Detected lcore 41 as core 5 on socket 1 00:03:56.712 EAL: Detected lcore 42 as core 6 on socket 1 00:03:56.712 EAL: Detected lcore 43 as core 7 on socket 1 00:03:56.712 EAL: Detected lcore 44 as core 8 on socket 1 00:03:56.712 EAL: Detected lcore 45 as core 9 on socket 1 00:03:56.712 EAL: Detected lcore 46 as core 10 on socket 1 00:03:56.712 EAL: Detected lcore 47 as core 11 on socket 1 00:03:56.712 EAL: Detected lcore 48 as core 12 on socket 1 00:03:56.712 EAL: Detected lcore 49 as core 13 on socket 1 00:03:56.712 EAL: Detected lcore 50 as core 14 on socket 1 00:03:56.712 EAL: Detected lcore 51 as core 15 on socket 1 00:03:56.712 EAL: Detected lcore 52 as core 16 on socket 1 00:03:56.712 EAL: Detected lcore 53 as core 17 on socket 1 00:03:56.712 EAL: Detected lcore 54 as core 18 on socket 1 00:03:56.712 EAL: Detected lcore 55 as core 19 on socket 1 00:03:56.712 EAL: Detected lcore 56 as core 20 on socket 1 00:03:56.712 EAL: Detected lcore 57 as core 21 on socket 1 00:03:56.712 EAL: Detected lcore 58 as core 22 on socket 1 00:03:56.712 EAL: Detected lcore 59 as core 23 on socket 1 00:03:56.712 EAL: Detected lcore 60 as core 24 on socket 1 00:03:56.712 EAL: Detected lcore 61 as core 25 on socket 1 00:03:56.712 EAL: Detected lcore 62 as core 26 on socket 1 00:03:56.712 EAL: Detected lcore 63 as core 27 on socket 1 00:03:56.712 EAL: Detected lcore 64 as core 28 on socket 1 00:03:56.712 EAL: Detected lcore 65 as core 29 on socket 1 00:03:56.712 EAL: Detected lcore 66 as core 30 on socket 1 00:03:56.712 EAL: Detected lcore 67 as core 31 on socket 1 00:03:56.712 EAL: Detected lcore 68 as core 32 on socket 1 00:03:56.712 EAL: Detected lcore 69 as core 33 on socket 1 00:03:56.712 EAL: Detected lcore 70 as core 34 on socket 1 
00:03:56.712 EAL: Detected lcore 71 as core 35 on socket 1 00:03:56.712 EAL: Detected lcore 72 as core 0 on socket 0 00:03:56.712 EAL: Detected lcore 73 as core 1 on socket 0 00:03:56.712 EAL: Detected lcore 74 as core 2 on socket 0 00:03:56.712 EAL: Detected lcore 75 as core 3 on socket 0 00:03:56.712 EAL: Detected lcore 76 as core 4 on socket 0 00:03:56.712 EAL: Detected lcore 77 as core 5 on socket 0 00:03:56.712 EAL: Detected lcore 78 as core 6 on socket 0 00:03:56.712 EAL: Detected lcore 79 as core 7 on socket 0 00:03:56.712 EAL: Detected lcore 80 as core 8 on socket 0 00:03:56.712 EAL: Detected lcore 81 as core 9 on socket 0 00:03:56.712 EAL: Detected lcore 82 as core 10 on socket 0 00:03:56.712 EAL: Detected lcore 83 as core 11 on socket 0 00:03:56.712 EAL: Detected lcore 84 as core 12 on socket 0 00:03:56.712 EAL: Detected lcore 85 as core 13 on socket 0 00:03:56.712 EAL: Detected lcore 86 as core 14 on socket 0 00:03:56.712 EAL: Detected lcore 87 as core 15 on socket 0 00:03:56.712 EAL: Detected lcore 88 as core 16 on socket 0 00:03:56.712 EAL: Detected lcore 89 as core 17 on socket 0 00:03:56.712 EAL: Detected lcore 90 as core 18 on socket 0 00:03:56.712 EAL: Detected lcore 91 as core 19 on socket 0 00:03:56.712 EAL: Detected lcore 92 as core 20 on socket 0 00:03:56.712 EAL: Detected lcore 93 as core 21 on socket 0 00:03:56.712 EAL: Detected lcore 94 as core 22 on socket 0 00:03:56.712 EAL: Detected lcore 95 as core 23 on socket 0 00:03:56.712 EAL: Detected lcore 96 as core 24 on socket 0 00:03:56.712 EAL: Detected lcore 97 as core 25 on socket 0 00:03:56.712 EAL: Detected lcore 98 as core 26 on socket 0 00:03:56.712 EAL: Detected lcore 99 as core 27 on socket 0 00:03:56.712 EAL: Detected lcore 100 as core 28 on socket 0 00:03:56.712 EAL: Detected lcore 101 as core 29 on socket 0 00:03:56.712 EAL: Detected lcore 102 as core 30 on socket 0 00:03:56.712 EAL: Detected lcore 103 as core 31 on socket 0 00:03:56.712 EAL: Detected lcore 104 as core 32 on socket 0 00:03:56.712 EAL: Detected lcore 105 as core 33 on socket 0 00:03:56.712 EAL: Detected lcore 106 as core 34 on socket 0 00:03:56.712 EAL: Detected lcore 107 as core 35 on socket 0 00:03:56.712 EAL: Detected lcore 108 as core 0 on socket 1 00:03:56.712 EAL: Detected lcore 109 as core 1 on socket 1 00:03:56.713 EAL: Detected lcore 110 as core 2 on socket 1 00:03:56.713 EAL: Detected lcore 111 as core 3 on socket 1 00:03:56.713 EAL: Detected lcore 112 as core 4 on socket 1 00:03:56.713 EAL: Detected lcore 113 as core 5 on socket 1 00:03:56.713 EAL: Detected lcore 114 as core 6 on socket 1 00:03:56.713 EAL: Detected lcore 115 as core 7 on socket 1 00:03:56.713 EAL: Detected lcore 116 as core 8 on socket 1 00:03:56.713 EAL: Detected lcore 117 as core 9 on socket 1 00:03:56.713 EAL: Detected lcore 118 as core 10 on socket 1 00:03:56.713 EAL: Detected lcore 119 as core 11 on socket 1 00:03:56.713 EAL: Detected lcore 120 as core 12 on socket 1 00:03:56.713 EAL: Detected lcore 121 as core 13 on socket 1 00:03:56.713 EAL: Detected lcore 122 as core 14 on socket 1 00:03:56.713 EAL: Detected lcore 123 as core 15 on socket 1 00:03:56.713 EAL: Detected lcore 124 as core 16 on socket 1 00:03:56.713 EAL: Detected lcore 125 as core 17 on socket 1 00:03:56.713 EAL: Detected lcore 126 as core 18 on socket 1 00:03:56.713 EAL: Detected lcore 127 as core 19 on socket 1 00:03:56.713 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:56.713 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:56.713 EAL: Skipped lcore 130 as core 22 on socket 1 
00:03:56.713 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:56.713 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:56.713 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:56.713 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:56.713 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:56.713 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:56.713 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:56.713 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:56.713 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:56.713 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:56.713 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:56.713 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:56.713 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:56.713 EAL: Maximum logical cores by configuration: 128 00:03:56.713 EAL: Detected CPU lcores: 128 00:03:56.713 EAL: Detected NUMA nodes: 2 00:03:56.713 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:56.713 EAL: Detected shared linkage of DPDK 00:03:56.713 EAL: No shared files mode enabled, IPC will be disabled 00:03:56.713 EAL: Bus pci wants IOVA as 'DC' 00:03:56.713 EAL: Buses did not request a specific IOVA mode. 00:03:56.713 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:56.713 EAL: Selected IOVA mode 'VA' 00:03:56.713 EAL: Probing VFIO support... 00:03:56.713 EAL: IOMMU type 1 (Type 1) is supported 00:03:56.713 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:56.713 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:56.713 EAL: VFIO support initialized 00:03:56.713 EAL: Ask a virtual area of 0x2e000 bytes 00:03:56.713 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:56.713 EAL: Setting up physically contiguous memory... 00:03:56.713 EAL: Setting maximum number of open files to 524288 00:03:56.713 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:56.713 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:56.713 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:56.713 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.713 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:56.713 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.713 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.713 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:56.713 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:56.713 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.713 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:56.713 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.713 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.713 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:56.713 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:56.713 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.713 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:56.713 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.713 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.713 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:56.713 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:56.713 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.713 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:56.713 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.713 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.713 EAL: 
Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:56.713 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:56.713 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:56.713 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.713 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:56.713 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.713 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.713 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:56.713 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:56.713 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.713 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:56.713 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.713 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.713 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:56.713 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:56.713 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.713 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:56.713 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.713 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.713 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:56.713 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:56.713 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.713 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:56.713 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.713 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.713 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:56.713 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:56.713 EAL: Hugepages will be freed exactly as allocated. 00:03:56.713 EAL: No shared files mode enabled, IPC is disabled 00:03:56.713 EAL: No shared files mode enabled, IPC is disabled 00:03:56.713 EAL: TSC frequency is ~2400000 KHz 00:03:56.713 EAL: Main lcore 0 is ready (tid=7f47a6afaa00;cpuset=[0]) 00:03:56.713 EAL: Trying to obtain current memory policy. 00:03:56.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.713 EAL: Restoring previous memory policy: 0 00:03:56.713 EAL: request: mp_malloc_sync 00:03:56.713 EAL: No shared files mode enabled, IPC is disabled 00:03:56.713 EAL: Heap on socket 0 was expanded by 2MB 00:03:56.713 EAL: No shared files mode enabled, IPC is disabled 00:03:56.713 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:56.713 EAL: Mem event callback 'spdk:(nil)' registered 00:03:56.713 00:03:56.713 00:03:56.713 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.713 http://cunit.sourceforge.net/ 00:03:56.713 00:03:56.713 00:03:56.713 Suite: components_suite 00:03:56.713 Test: vtophys_malloc_test ...passed 00:03:56.713 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:03:56.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.713 EAL: Restoring previous memory policy: 4 00:03:56.713 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.713 EAL: request: mp_malloc_sync 00:03:56.713 EAL: No shared files mode enabled, IPC is disabled 00:03:56.713 EAL: Heap on socket 0 was expanded by 4MB 00:03:56.713 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.713 EAL: request: mp_malloc_sync 00:03:56.713 EAL: No shared files mode enabled, IPC is disabled 00:03:56.713 EAL: Heap on socket 0 was shrunk by 4MB 00:03:56.713 EAL: Trying to obtain current memory policy. 00:03:56.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.713 EAL: Restoring previous memory policy: 4 00:03:56.713 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.713 EAL: request: mp_malloc_sync 00:03:56.713 EAL: No shared files mode enabled, IPC is disabled 00:03:56.713 EAL: Heap on socket 0 was expanded by 6MB 00:03:56.713 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.713 EAL: request: mp_malloc_sync 00:03:56.713 EAL: No shared files mode enabled, IPC is disabled 00:03:56.713 EAL: Heap on socket 0 was shrunk by 6MB 00:03:56.713 EAL: Trying to obtain current memory policy. 00:03:56.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.713 EAL: Restoring previous memory policy: 4 00:03:56.713 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.713 EAL: request: mp_malloc_sync 00:03:56.713 EAL: No shared files mode enabled, IPC is disabled 00:03:56.713 EAL: Heap on socket 0 was expanded by 10MB 00:03:56.713 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.713 EAL: request: mp_malloc_sync 00:03:56.713 EAL: No shared files mode enabled, IPC is disabled 00:03:56.713 EAL: Heap on socket 0 was shrunk by 10MB 00:03:56.713 EAL: Trying to obtain current memory policy. 00:03:56.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.713 EAL: Restoring previous memory policy: 4 00:03:56.713 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.713 EAL: request: mp_malloc_sync 00:03:56.713 EAL: No shared files mode enabled, IPC is disabled 00:03:56.713 EAL: Heap on socket 0 was expanded by 18MB 00:03:56.713 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.713 EAL: request: mp_malloc_sync 00:03:56.713 EAL: No shared files mode enabled, IPC is disabled 00:03:56.713 EAL: Heap on socket 0 was shrunk by 18MB 00:03:56.713 EAL: Trying to obtain current memory policy. 00:03:56.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.713 EAL: Restoring previous memory policy: 4 00:03:56.713 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.713 EAL: request: mp_malloc_sync 00:03:56.713 EAL: No shared files mode enabled, IPC is disabled 00:03:56.713 EAL: Heap on socket 0 was expanded by 34MB 00:03:56.713 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.713 EAL: request: mp_malloc_sync 00:03:56.713 EAL: No shared files mode enabled, IPC is disabled 00:03:56.713 EAL: Heap on socket 0 was shrunk by 34MB 00:03:56.713 EAL: Trying to obtain current memory policy. 
00:03:56.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.974 EAL: Restoring previous memory policy: 4 00:03:56.975 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.975 EAL: request: mp_malloc_sync 00:03:56.975 EAL: No shared files mode enabled, IPC is disabled 00:03:56.975 EAL: Heap on socket 0 was expanded by 66MB 00:03:56.975 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.975 EAL: request: mp_malloc_sync 00:03:56.975 EAL: No shared files mode enabled, IPC is disabled 00:03:56.975 EAL: Heap on socket 0 was shrunk by 66MB 00:03:56.975 EAL: Trying to obtain current memory policy. 00:03:56.975 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.975 EAL: Restoring previous memory policy: 4 00:03:56.975 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.975 EAL: request: mp_malloc_sync 00:03:56.975 EAL: No shared files mode enabled, IPC is disabled 00:03:56.975 EAL: Heap on socket 0 was expanded by 130MB 00:03:56.975 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.975 EAL: request: mp_malloc_sync 00:03:56.975 EAL: No shared files mode enabled, IPC is disabled 00:03:56.975 EAL: Heap on socket 0 was shrunk by 130MB 00:03:56.975 EAL: Trying to obtain current memory policy. 00:03:56.975 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.975 EAL: Restoring previous memory policy: 4 00:03:56.975 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.975 EAL: request: mp_malloc_sync 00:03:56.975 EAL: No shared files mode enabled, IPC is disabled 00:03:56.975 EAL: Heap on socket 0 was expanded by 258MB 00:03:56.975 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.975 EAL: request: mp_malloc_sync 00:03:56.975 EAL: No shared files mode enabled, IPC is disabled 00:03:56.975 EAL: Heap on socket 0 was shrunk by 258MB 00:03:56.975 EAL: Trying to obtain current memory policy. 00:03:56.975 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.975 EAL: Restoring previous memory policy: 4 00:03:56.975 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.975 EAL: request: mp_malloc_sync 00:03:56.975 EAL: No shared files mode enabled, IPC is disabled 00:03:56.975 EAL: Heap on socket 0 was expanded by 514MB 00:03:57.236 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.236 EAL: request: mp_malloc_sync 00:03:57.236 EAL: No shared files mode enabled, IPC is disabled 00:03:57.236 EAL: Heap on socket 0 was shrunk by 514MB 00:03:57.236 EAL: Trying to obtain current memory policy. 
00:03:57.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.236 EAL: Restoring previous memory policy: 4 00:03:57.236 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.236 EAL: request: mp_malloc_sync 00:03:57.236 EAL: No shared files mode enabled, IPC is disabled 00:03:57.236 EAL: Heap on socket 0 was expanded by 1026MB 00:03:57.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.497 EAL: request: mp_malloc_sync 00:03:57.497 EAL: No shared files mode enabled, IPC is disabled 00:03:57.497 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:57.497 passed 00:03:57.497 00:03:57.497 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.497 suites 1 1 n/a 0 0 00:03:57.497 tests 2 2 2 0 0 00:03:57.497 asserts 497 497 497 0 n/a 00:03:57.497 00:03:57.497 Elapsed time = 0.658 seconds 00:03:57.497 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.497 EAL: request: mp_malloc_sync 00:03:57.497 EAL: No shared files mode enabled, IPC is disabled 00:03:57.497 EAL: Heap on socket 0 was shrunk by 2MB 00:03:57.497 EAL: No shared files mode enabled, IPC is disabled 00:03:57.497 EAL: No shared files mode enabled, IPC is disabled 00:03:57.497 EAL: No shared files mode enabled, IPC is disabled 00:03:57.497 00:03:57.497 real 0m0.777s 00:03:57.497 user 0m0.405s 00:03:57.497 sys 0m0.348s 00:03:57.497 12:08:31 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:57.497 12:08:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:57.497 ************************************ 00:03:57.497 END TEST env_vtophys 00:03:57.497 ************************************ 00:03:57.497 12:08:31 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:57.497 12:08:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:57.497 12:08:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:57.497 12:08:31 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.497 ************************************ 00:03:57.497 START TEST env_pci 00:03:57.497 ************************************ 00:03:57.497 12:08:32 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:57.497 00:03:57.497 00:03:57.497 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.497 http://cunit.sourceforge.net/ 00:03:57.497 00:03:57.497 00:03:57.497 Suite: pci 00:03:57.497 Test: pci_hook ...[2024-11-04 12:08:32.035144] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1401720 has claimed it 00:03:57.758 EAL: Cannot find device (10000:00:01.0) 00:03:57.758 EAL: Failed to attach device on primary process 00:03:57.758 passed 00:03:57.758 00:03:57.758 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.758 suites 1 1 n/a 0 0 00:03:57.758 tests 1 1 1 0 0 00:03:57.758 asserts 25 25 25 0 n/a 00:03:57.758 00:03:57.758 Elapsed time = 0.030 seconds 00:03:57.758 00:03:57.758 real 0m0.051s 00:03:57.758 user 0m0.017s 00:03:57.758 sys 0m0.033s 00:03:57.758 12:08:32 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:57.758 12:08:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:57.758 ************************************ 00:03:57.758 END TEST env_pci 00:03:57.758 ************************************ 00:03:57.758 12:08:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:57.758 
12:08:32 env -- env/env.sh@15 -- # uname 00:03:57.758 12:08:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:57.758 12:08:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:57.758 12:08:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:57.758 12:08:32 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:03:57.758 12:08:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:57.758 12:08:32 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.758 ************************************ 00:03:57.758 START TEST env_dpdk_post_init 00:03:57.758 ************************************ 00:03:57.758 12:08:32 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:57.758 EAL: Detected CPU lcores: 128 00:03:57.758 EAL: Detected NUMA nodes: 2 00:03:57.758 EAL: Detected shared linkage of DPDK 00:03:57.758 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:57.758 EAL: Selected IOVA mode 'VA' 00:03:57.758 EAL: VFIO support initialized 00:03:57.758 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:57.758 EAL: Using IOMMU type 1 (Type 1) 00:03:58.018 EAL: Ignore mapping IO port bar(1) 00:03:58.018 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:58.278 EAL: Ignore mapping IO port bar(1) 00:03:58.278 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:58.278 EAL: Ignore mapping IO port bar(1) 00:03:58.538 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:58.538 EAL: Ignore mapping IO port bar(1) 00:03:58.799 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:58.799 EAL: Ignore mapping IO port bar(1) 00:03:59.059 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:59.059 EAL: Ignore mapping IO port bar(1) 00:03:59.059 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:59.318 EAL: Ignore mapping IO port bar(1) 00:03:59.318 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:59.578 EAL: Ignore mapping IO port bar(1) 00:03:59.578 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:59.838 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:59.838 EAL: Ignore mapping IO port bar(1) 00:04:00.098 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:00.098 EAL: Ignore mapping IO port bar(1) 00:04:00.357 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:00.357 EAL: Ignore mapping IO port bar(1) 00:04:00.617 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:00.617 EAL: Ignore mapping IO port bar(1) 00:04:00.617 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:00.877 EAL: Ignore mapping IO port bar(1) 00:04:00.877 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:01.137 EAL: Ignore mapping IO port bar(1) 00:04:01.137 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:01.396 EAL: Ignore mapping IO port bar(1) 00:04:01.396 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 
(socket 1) 00:04:01.396 EAL: Ignore mapping IO port bar(1) 00:04:01.657 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:01.657 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:01.657 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:01.657 Starting DPDK initialization... 00:04:01.657 Starting SPDK post initialization... 00:04:01.658 SPDK NVMe probe 00:04:01.658 Attaching to 0000:65:00.0 00:04:01.658 Attached to 0000:65:00.0 00:04:01.658 Cleaning up... 00:04:03.570 00:04:03.570 real 0m5.710s 00:04:03.570 user 0m0.091s 00:04:03.570 sys 0m0.166s 00:04:03.570 12:08:37 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.570 12:08:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:03.570 ************************************ 00:04:03.570 END TEST env_dpdk_post_init 00:04:03.570 ************************************ 00:04:03.570 12:08:37 env -- env/env.sh@26 -- # uname 00:04:03.570 12:08:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:03.570 12:08:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:03.570 12:08:37 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.570 12:08:37 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.570 12:08:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.570 ************************************ 00:04:03.570 START TEST env_mem_callbacks 00:04:03.570 ************************************ 00:04:03.570 12:08:37 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:03.570 EAL: Detected CPU lcores: 128 00:04:03.570 EAL: Detected NUMA nodes: 2 00:04:03.570 EAL: Detected shared linkage of DPDK 00:04:03.570 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:03.570 EAL: Selected IOVA mode 'VA' 00:04:03.570 EAL: VFIO support initialized 00:04:03.570 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:03.570 00:04:03.570 00:04:03.570 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.570 http://cunit.sourceforge.net/ 00:04:03.570 00:04:03.570 00:04:03.570 Suite: memory 00:04:03.570 Test: test ... 
00:04:03.570 register 0x200000200000 2097152 00:04:03.570 malloc 3145728 00:04:03.570 register 0x200000400000 4194304 00:04:03.570 buf 0x200000500000 len 3145728 PASSED 00:04:03.570 malloc 64 00:04:03.570 buf 0x2000004fff40 len 64 PASSED 00:04:03.570 malloc 4194304 00:04:03.570 register 0x200000800000 6291456 00:04:03.570 buf 0x200000a00000 len 4194304 PASSED 00:04:03.570 free 0x200000500000 3145728 00:04:03.570 free 0x2000004fff40 64 00:04:03.570 unregister 0x200000400000 4194304 PASSED 00:04:03.570 free 0x200000a00000 4194304 00:04:03.570 unregister 0x200000800000 6291456 PASSED 00:04:03.570 malloc 8388608 00:04:03.570 register 0x200000400000 10485760 00:04:03.570 buf 0x200000600000 len 8388608 PASSED 00:04:03.570 free 0x200000600000 8388608 00:04:03.570 unregister 0x200000400000 10485760 PASSED 00:04:03.570 passed 00:04:03.570 00:04:03.570 Run Summary: Type Total Ran Passed Failed Inactive 00:04:03.570 suites 1 1 n/a 0 0 00:04:03.570 tests 1 1 1 0 0 00:04:03.570 asserts 15 15 15 0 n/a 00:04:03.570 00:04:03.570 Elapsed time = 0.008 seconds 00:04:03.570 00:04:03.570 real 0m0.065s 00:04:03.570 user 0m0.022s 00:04:03.570 sys 0m0.042s 00:04:03.570 12:08:38 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.570 12:08:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:03.570 ************************************ 00:04:03.570 END TEST env_mem_callbacks 00:04:03.570 ************************************ 00:04:03.570 00:04:03.570 real 0m7.407s 00:04:03.570 user 0m0.999s 00:04:03.570 sys 0m0.963s 00:04:03.570 12:08:38 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.570 12:08:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.570 ************************************ 00:04:03.570 END TEST env 00:04:03.570 ************************************ 00:04:03.570 12:08:38 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:03.570 12:08:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.570 12:08:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.570 12:08:38 -- common/autotest_common.sh@10 -- # set +x 00:04:03.570 ************************************ 00:04:03.570 START TEST rpc 00:04:03.570 ************************************ 00:04:03.570 12:08:38 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:03.831 * Looking for test storage... 
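Before the rpc suite output gets going, one note on the env run that just finished: the post-init test is handy standalone when bringing up a new box, since it probes every ioat and nvme device visible to the driver. Its invocation is exactly what env.sh assembled above; only the sudo is an assumption here (PCI/VFIO access normally needs root):

```bash
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -c 0x1: one core is enough; --base-virtaddr pins the hugepage mappings at a
# fixed virtual address, as env.sh arranged above for Linux runs.
sudo "$SPDK_ROOT/test/env/env_dpdk_post_init/env_dpdk_post_init" \
    -c 0x1 --base-virtaddr=0x200000000000
```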
00:04:03.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:03.831 12:08:38 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:03.831 12:08:38 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:03.831 12:08:38 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:03.831 12:08:38 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:03.831 12:08:38 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.831 12:08:38 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.831 12:08:38 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.831 12:08:38 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.831 12:08:38 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.831 12:08:38 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.831 12:08:38 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.831 12:08:38 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.831 12:08:38 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.831 12:08:38 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.831 12:08:38 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.831 12:08:38 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:03.831 12:08:38 rpc -- scripts/common.sh@345 -- # : 1 00:04:03.831 12:08:38 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.831 12:08:38 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:03.831 12:08:38 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:03.831 12:08:38 rpc -- scripts/common.sh@353 -- # local d=1 00:04:03.831 12:08:38 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.831 12:08:38 rpc -- scripts/common.sh@355 -- # echo 1 00:04:03.831 12:08:38 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.831 12:08:38 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:03.831 12:08:38 rpc -- scripts/common.sh@353 -- # local d=2 00:04:03.831 12:08:38 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.831 12:08:38 rpc -- scripts/common.sh@355 -- # echo 2 00:04:03.831 12:08:38 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.831 12:08:38 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.831 12:08:38 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.831 12:08:38 rpc -- scripts/common.sh@368 -- # return 0 00:04:03.831 12:08:38 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.831 12:08:38 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:03.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.832 --rc genhtml_branch_coverage=1 00:04:03.832 --rc genhtml_function_coverage=1 00:04:03.832 --rc genhtml_legend=1 00:04:03.832 --rc geninfo_all_blocks=1 00:04:03.832 --rc geninfo_unexecuted_blocks=1 00:04:03.832 00:04:03.832 ' 00:04:03.832 12:08:38 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:03.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.832 --rc genhtml_branch_coverage=1 00:04:03.832 --rc genhtml_function_coverage=1 00:04:03.832 --rc genhtml_legend=1 00:04:03.832 --rc geninfo_all_blocks=1 00:04:03.832 --rc geninfo_unexecuted_blocks=1 00:04:03.832 00:04:03.832 ' 00:04:03.832 12:08:38 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:03.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.832 --rc genhtml_branch_coverage=1 00:04:03.832 --rc genhtml_function_coverage=1 
00:04:03.832 --rc genhtml_legend=1 00:04:03.832 --rc geninfo_all_blocks=1 00:04:03.832 --rc geninfo_unexecuted_blocks=1 00:04:03.832 00:04:03.832 ' 00:04:03.832 12:08:38 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:03.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.832 --rc genhtml_branch_coverage=1 00:04:03.832 --rc genhtml_function_coverage=1 00:04:03.832 --rc genhtml_legend=1 00:04:03.832 --rc geninfo_all_blocks=1 00:04:03.832 --rc geninfo_unexecuted_blocks=1 00:04:03.832 00:04:03.832 ' 00:04:03.832 12:08:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1403174 00:04:03.832 12:08:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.832 12:08:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1403174 00:04:03.832 12:08:38 rpc -- common/autotest_common.sh@831 -- # '[' -z 1403174 ']' 00:04:03.832 12:08:38 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.832 12:08:38 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:03.832 12:08:38 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.832 12:08:38 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:03.832 12:08:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.832 12:08:38 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:03.832 [2024-11-04 12:08:38.379193] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:04:03.832 [2024-11-04 12:08:38.379263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1403174 ] 00:04:04.092 [2024-11-04 12:08:38.443908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.092 [2024-11-04 12:08:38.486804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:04.092 [2024-11-04 12:08:38.486844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1403174' to capture a snapshot of events at runtime. 00:04:04.092 [2024-11-04 12:08:38.486852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:04.092 [2024-11-04 12:08:38.486859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:04.092 [2024-11-04 12:08:38.486865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1403174 for offline analysis/debug. 
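Everything rpc.sh does below goes over JSON-RPC against this freshly started target. Stripped of autotest plumbing, the setup reduces to roughly the following sketch; the polling loop stands in for waitforlisten, which additionally checks that the pid stays alive:

```bash
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_ROOT/build/bin/spdk_tgt" -e bdev &   # -e bdev: the tpoint group mask noted above
spdk_pid=$!
# Poll the default UNIX socket (/var/tmp/spdk.sock) until the RPC server answers.
until "$SPDK_ROOT/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
    sleep 0.1
done
# Per the startup notice: spdk_trace -s spdk_tgt -p "$spdk_pid" snapshots events.
```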
00:04:04.092 [2024-11-04 12:08:38.487450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.662 12:08:39 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:04.662 12:08:39 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:04.662 12:08:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:04.662 12:08:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:04.662 12:08:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:04.662 12:08:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:04.663 12:08:39 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.663 12:08:39 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.663 12:08:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.663 ************************************ 00:04:04.663 START TEST rpc_integrity 00:04:04.663 ************************************ 00:04:04.663 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:04.663 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:04.663 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:04.663 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.663 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:04.663 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:04.663 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:04.923 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:04.923 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:04.923 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:04.923 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.923 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:04.923 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:04.923 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:04.923 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:04.923 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.923 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:04.923 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:04.923 { 00:04:04.923 "name": "Malloc0", 00:04:04.923 "aliases": [ 00:04:04.923 "4b02dc55-ba13-46ed-866d-0c4918bc7e2d" 00:04:04.923 ], 00:04:04.923 "product_name": "Malloc disk", 00:04:04.923 "block_size": 512, 00:04:04.923 "num_blocks": 16384, 00:04:04.923 "uuid": "4b02dc55-ba13-46ed-866d-0c4918bc7e2d", 00:04:04.923 "assigned_rate_limits": { 00:04:04.923 "rw_ios_per_sec": 0, 00:04:04.923 "rw_mbytes_per_sec": 0, 00:04:04.923 "r_mbytes_per_sec": 0, 00:04:04.923 "w_mbytes_per_sec": 0 00:04:04.923 }, 
00:04:04.923 "claimed": false, 00:04:04.923 "zoned": false, 00:04:04.923 "supported_io_types": { 00:04:04.923 "read": true, 00:04:04.923 "write": true, 00:04:04.923 "unmap": true, 00:04:04.923 "flush": true, 00:04:04.923 "reset": true, 00:04:04.923 "nvme_admin": false, 00:04:04.923 "nvme_io": false, 00:04:04.923 "nvme_io_md": false, 00:04:04.923 "write_zeroes": true, 00:04:04.923 "zcopy": true, 00:04:04.923 "get_zone_info": false, 00:04:04.923 "zone_management": false, 00:04:04.923 "zone_append": false, 00:04:04.923 "compare": false, 00:04:04.923 "compare_and_write": false, 00:04:04.923 "abort": true, 00:04:04.923 "seek_hole": false, 00:04:04.923 "seek_data": false, 00:04:04.923 "copy": true, 00:04:04.923 "nvme_iov_md": false 00:04:04.923 }, 00:04:04.923 "memory_domains": [ 00:04:04.923 { 00:04:04.923 "dma_device_id": "system", 00:04:04.923 "dma_device_type": 1 00:04:04.923 }, 00:04:04.923 { 00:04:04.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.923 "dma_device_type": 2 00:04:04.923 } 00:04:04.923 ], 00:04:04.923 "driver_specific": {} 00:04:04.923 } 00:04:04.923 ]' 00:04:04.923 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:04.923 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:04.923 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:04.923 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:04.923 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.923 [2024-11-04 12:08:39.330288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:04.923 [2024-11-04 12:08:39.330319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:04.923 [2024-11-04 12:08:39.330331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x153de60 00:04:04.923 [2024-11-04 12:08:39.330339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:04.923 [2024-11-04 12:08:39.331690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:04.923 [2024-11-04 12:08:39.331712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:04.923 Passthru0 00:04:04.923 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:04.923 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:04.923 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:04.923 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.923 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:04.923 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:04.923 { 00:04:04.924 "name": "Malloc0", 00:04:04.924 "aliases": [ 00:04:04.924 "4b02dc55-ba13-46ed-866d-0c4918bc7e2d" 00:04:04.924 ], 00:04:04.924 "product_name": "Malloc disk", 00:04:04.924 "block_size": 512, 00:04:04.924 "num_blocks": 16384, 00:04:04.924 "uuid": "4b02dc55-ba13-46ed-866d-0c4918bc7e2d", 00:04:04.924 "assigned_rate_limits": { 00:04:04.924 "rw_ios_per_sec": 0, 00:04:04.924 "rw_mbytes_per_sec": 0, 00:04:04.924 "r_mbytes_per_sec": 0, 00:04:04.924 "w_mbytes_per_sec": 0 00:04:04.924 }, 00:04:04.924 "claimed": true, 00:04:04.924 "claim_type": "exclusive_write", 00:04:04.924 "zoned": false, 00:04:04.924 "supported_io_types": { 00:04:04.924 "read": true, 00:04:04.924 "write": true, 00:04:04.924 "unmap": true, 00:04:04.924 "flush": 
true, 00:04:04.924 "reset": true, 00:04:04.924 "nvme_admin": false, 00:04:04.924 "nvme_io": false, 00:04:04.924 "nvme_io_md": false, 00:04:04.924 "write_zeroes": true, 00:04:04.924 "zcopy": true, 00:04:04.924 "get_zone_info": false, 00:04:04.924 "zone_management": false, 00:04:04.924 "zone_append": false, 00:04:04.924 "compare": false, 00:04:04.924 "compare_and_write": false, 00:04:04.924 "abort": true, 00:04:04.924 "seek_hole": false, 00:04:04.924 "seek_data": false, 00:04:04.924 "copy": true, 00:04:04.924 "nvme_iov_md": false 00:04:04.924 }, 00:04:04.924 "memory_domains": [ 00:04:04.924 { 00:04:04.924 "dma_device_id": "system", 00:04:04.924 "dma_device_type": 1 00:04:04.924 }, 00:04:04.924 { 00:04:04.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.924 "dma_device_type": 2 00:04:04.924 } 00:04:04.924 ], 00:04:04.924 "driver_specific": {} 00:04:04.924 }, 00:04:04.924 { 00:04:04.924 "name": "Passthru0", 00:04:04.924 "aliases": [ 00:04:04.924 "8ce879d0-14f9-57ef-ac97-fc5a379bbd19" 00:04:04.924 ], 00:04:04.924 "product_name": "passthru", 00:04:04.924 "block_size": 512, 00:04:04.924 "num_blocks": 16384, 00:04:04.924 "uuid": "8ce879d0-14f9-57ef-ac97-fc5a379bbd19", 00:04:04.924 "assigned_rate_limits": { 00:04:04.924 "rw_ios_per_sec": 0, 00:04:04.924 "rw_mbytes_per_sec": 0, 00:04:04.924 "r_mbytes_per_sec": 0, 00:04:04.924 "w_mbytes_per_sec": 0 00:04:04.924 }, 00:04:04.924 "claimed": false, 00:04:04.924 "zoned": false, 00:04:04.924 "supported_io_types": { 00:04:04.924 "read": true, 00:04:04.924 "write": true, 00:04:04.924 "unmap": true, 00:04:04.924 "flush": true, 00:04:04.924 "reset": true, 00:04:04.924 "nvme_admin": false, 00:04:04.924 "nvme_io": false, 00:04:04.924 "nvme_io_md": false, 00:04:04.924 "write_zeroes": true, 00:04:04.924 "zcopy": true, 00:04:04.924 "get_zone_info": false, 00:04:04.924 "zone_management": false, 00:04:04.924 "zone_append": false, 00:04:04.924 "compare": false, 00:04:04.924 "compare_and_write": false, 00:04:04.924 "abort": true, 00:04:04.924 "seek_hole": false, 00:04:04.924 "seek_data": false, 00:04:04.924 "copy": true, 00:04:04.924 "nvme_iov_md": false 00:04:04.924 }, 00:04:04.924 "memory_domains": [ 00:04:04.924 { 00:04:04.924 "dma_device_id": "system", 00:04:04.924 "dma_device_type": 1 00:04:04.924 }, 00:04:04.924 { 00:04:04.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.924 "dma_device_type": 2 00:04:04.924 } 00:04:04.924 ], 00:04:04.924 "driver_specific": { 00:04:04.924 "passthru": { 00:04:04.924 "name": "Passthru0", 00:04:04.924 "base_bdev_name": "Malloc0" 00:04:04.924 } 00:04:04.924 } 00:04:04.924 } 00:04:04.924 ]' 00:04:04.924 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:04.924 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:04.924 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:04.924 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:04.924 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.924 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:04.924 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:04.924 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:04.924 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.924 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:04.924 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:04.924 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:04.924 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.924 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:04.924 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:04.924 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:04.924 12:08:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:04.924 00:04:04.924 real 0m0.286s 00:04:04.924 user 0m0.186s 00:04:04.924 sys 0m0.037s 00:04:04.924 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.924 12:08:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.924 ************************************ 00:04:04.924 END TEST rpc_integrity 00:04:04.924 ************************************ 00:04:05.184 12:08:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:05.184 12:08:39 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.184 12:08:39 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.184 12:08:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.184 ************************************ 00:04:05.184 START TEST rpc_plugins 00:04:05.184 ************************************ 00:04:05.185 12:08:39 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:05.185 12:08:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:05.185 12:08:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:05.185 12:08:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.185 12:08:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:05.185 12:08:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:05.185 12:08:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:05.185 12:08:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:05.185 12:08:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.185 12:08:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:05.185 12:08:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:05.185 { 00:04:05.185 "name": "Malloc1", 00:04:05.185 "aliases": [ 00:04:05.185 "7215c416-4ca5-4cc5-ac56-de5ac47ec712" 00:04:05.185 ], 00:04:05.185 "product_name": "Malloc disk", 00:04:05.185 "block_size": 4096, 00:04:05.185 "num_blocks": 256, 00:04:05.185 "uuid": "7215c416-4ca5-4cc5-ac56-de5ac47ec712", 00:04:05.185 "assigned_rate_limits": { 00:04:05.185 "rw_ios_per_sec": 0, 00:04:05.185 "rw_mbytes_per_sec": 0, 00:04:05.185 "r_mbytes_per_sec": 0, 00:04:05.185 "w_mbytes_per_sec": 0 00:04:05.185 }, 00:04:05.185 "claimed": false, 00:04:05.185 "zoned": false, 00:04:05.185 "supported_io_types": { 00:04:05.185 "read": true, 00:04:05.185 "write": true, 00:04:05.185 "unmap": true, 00:04:05.185 "flush": true, 00:04:05.185 "reset": true, 00:04:05.185 "nvme_admin": false, 00:04:05.185 "nvme_io": false, 00:04:05.185 "nvme_io_md": false, 00:04:05.185 "write_zeroes": true, 00:04:05.185 "zcopy": true, 00:04:05.185 "get_zone_info": false, 00:04:05.185 "zone_management": false, 00:04:05.185 "zone_append": false, 00:04:05.185 "compare": false, 00:04:05.185 "compare_and_write": false, 00:04:05.185 "abort": true, 00:04:05.185 "seek_hole": false, 00:04:05.185 "seek_data": false, 00:04:05.185 "copy": true, 00:04:05.185 "nvme_iov_md": false 
00:04:05.185 }, 00:04:05.185 "memory_domains": [ 00:04:05.185 { 00:04:05.185 "dma_device_id": "system", 00:04:05.185 "dma_device_type": 1 00:04:05.185 }, 00:04:05.185 { 00:04:05.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.185 "dma_device_type": 2 00:04:05.185 } 00:04:05.185 ], 00:04:05.185 "driver_specific": {} 00:04:05.185 } 00:04:05.185 ]' 00:04:05.185 12:08:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:05.185 12:08:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:05.185 12:08:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:05.185 12:08:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:05.185 12:08:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.185 12:08:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:05.185 12:08:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:05.185 12:08:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:05.185 12:08:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.185 12:08:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:05.185 12:08:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:05.185 12:08:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:05.185 12:08:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:05.185 00:04:05.185 real 0m0.151s 00:04:05.185 user 0m0.089s 00:04:05.185 sys 0m0.026s 00:04:05.185 12:08:39 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.185 12:08:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.185 ************************************ 00:04:05.185 END TEST rpc_plugins 00:04:05.185 ************************************ 00:04:05.185 12:08:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:05.185 12:08:39 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.185 12:08:39 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.185 12:08:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.445 ************************************ 00:04:05.445 START TEST rpc_trace_cmd_test 00:04:05.445 ************************************ 00:04:05.445 12:08:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:05.445 12:08:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:05.445 12:08:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:05.445 12:08:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:05.445 12:08:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:05.445 12:08:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:05.445 12:08:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:05.445 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1403174", 00:04:05.445 "tpoint_group_mask": "0x8", 00:04:05.445 "iscsi_conn": { 00:04:05.445 "mask": "0x2", 00:04:05.445 "tpoint_mask": "0x0" 00:04:05.445 }, 00:04:05.445 "scsi": { 00:04:05.445 "mask": "0x4", 00:04:05.445 "tpoint_mask": "0x0" 00:04:05.445 }, 00:04:05.445 "bdev": { 00:04:05.445 "mask": "0x8", 00:04:05.445 "tpoint_mask": "0xffffffffffffffff" 00:04:05.445 }, 00:04:05.445 "nvmf_rdma": { 00:04:05.445 "mask": "0x10", 00:04:05.445 "tpoint_mask": "0x0" 00:04:05.445 }, 00:04:05.445 "nvmf_tcp": { 00:04:05.445 "mask": "0x20", 00:04:05.445 
"tpoint_mask": "0x0" 00:04:05.445 }, 00:04:05.445 "ftl": { 00:04:05.445 "mask": "0x40", 00:04:05.445 "tpoint_mask": "0x0" 00:04:05.445 }, 00:04:05.445 "blobfs": { 00:04:05.445 "mask": "0x80", 00:04:05.445 "tpoint_mask": "0x0" 00:04:05.445 }, 00:04:05.445 "dsa": { 00:04:05.445 "mask": "0x200", 00:04:05.445 "tpoint_mask": "0x0" 00:04:05.445 }, 00:04:05.445 "thread": { 00:04:05.445 "mask": "0x400", 00:04:05.445 "tpoint_mask": "0x0" 00:04:05.445 }, 00:04:05.445 "nvme_pcie": { 00:04:05.445 "mask": "0x800", 00:04:05.445 "tpoint_mask": "0x0" 00:04:05.445 }, 00:04:05.445 "iaa": { 00:04:05.445 "mask": "0x1000", 00:04:05.445 "tpoint_mask": "0x0" 00:04:05.445 }, 00:04:05.445 "nvme_tcp": { 00:04:05.445 "mask": "0x2000", 00:04:05.445 "tpoint_mask": "0x0" 00:04:05.445 }, 00:04:05.445 "bdev_nvme": { 00:04:05.445 "mask": "0x4000", 00:04:05.445 "tpoint_mask": "0x0" 00:04:05.445 }, 00:04:05.445 "sock": { 00:04:05.445 "mask": "0x8000", 00:04:05.445 "tpoint_mask": "0x0" 00:04:05.445 }, 00:04:05.445 "blob": { 00:04:05.445 "mask": "0x10000", 00:04:05.445 "tpoint_mask": "0x0" 00:04:05.445 }, 00:04:05.445 "bdev_raid": { 00:04:05.445 "mask": "0x20000", 00:04:05.445 "tpoint_mask": "0x0" 00:04:05.445 }, 00:04:05.445 "scheduler": { 00:04:05.445 "mask": "0x40000", 00:04:05.445 "tpoint_mask": "0x0" 00:04:05.445 } 00:04:05.445 }' 00:04:05.445 12:08:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:05.445 12:08:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:05.445 12:08:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:05.445 12:08:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:05.445 12:08:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:05.445 12:08:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:05.445 12:08:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:05.445 12:08:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:05.445 12:08:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:05.706 12:08:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:05.706 00:04:05.706 real 0m0.251s 00:04:05.706 user 0m0.211s 00:04:05.706 sys 0m0.029s 00:04:05.706 12:08:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.706 12:08:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:05.706 ************************************ 00:04:05.706 END TEST rpc_trace_cmd_test 00:04:05.706 ************************************ 00:04:05.706 12:08:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:05.706 12:08:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:05.706 12:08:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:05.706 12:08:40 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.706 12:08:40 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.706 12:08:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.706 ************************************ 00:04:05.706 START TEST rpc_daemon_integrity 00:04:05.706 ************************************ 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:05.706 12:08:40 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:05.706 { 00:04:05.706 "name": "Malloc2", 00:04:05.706 "aliases": [ 00:04:05.706 "23d69acc-9c42-49a3-bddb-c3c6857c87c8" 00:04:05.706 ], 00:04:05.706 "product_name": "Malloc disk", 00:04:05.706 "block_size": 512, 00:04:05.706 "num_blocks": 16384, 00:04:05.706 "uuid": "23d69acc-9c42-49a3-bddb-c3c6857c87c8", 00:04:05.706 "assigned_rate_limits": { 00:04:05.706 "rw_ios_per_sec": 0, 00:04:05.706 "rw_mbytes_per_sec": 0, 00:04:05.706 "r_mbytes_per_sec": 0, 00:04:05.706 "w_mbytes_per_sec": 0 00:04:05.706 }, 00:04:05.706 "claimed": false, 00:04:05.706 "zoned": false, 00:04:05.706 "supported_io_types": { 00:04:05.706 "read": true, 00:04:05.706 "write": true, 00:04:05.706 "unmap": true, 00:04:05.706 "flush": true, 00:04:05.706 "reset": true, 00:04:05.706 "nvme_admin": false, 00:04:05.706 "nvme_io": false, 00:04:05.706 "nvme_io_md": false, 00:04:05.706 "write_zeroes": true, 00:04:05.706 "zcopy": true, 00:04:05.706 "get_zone_info": false, 00:04:05.706 "zone_management": false, 00:04:05.706 "zone_append": false, 00:04:05.706 "compare": false, 00:04:05.706 "compare_and_write": false, 00:04:05.706 "abort": true, 00:04:05.706 "seek_hole": false, 00:04:05.706 "seek_data": false, 00:04:05.706 "copy": true, 00:04:05.706 "nvme_iov_md": false 00:04:05.706 }, 00:04:05.706 "memory_domains": [ 00:04:05.706 { 00:04:05.706 "dma_device_id": "system", 00:04:05.706 "dma_device_type": 1 00:04:05.706 }, 00:04:05.706 { 00:04:05.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.706 "dma_device_type": 2 00:04:05.706 } 00:04:05.706 ], 00:04:05.706 "driver_specific": {} 00:04:05.706 } 00:04:05.706 ]' 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.706 [2024-11-04 12:08:40.260806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:05.706 
[2024-11-04 12:08:40.260837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:05.706 [2024-11-04 12:08:40.260850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x166f150 00:04:05.706 [2024-11-04 12:08:40.260858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:05.706 [2024-11-04 12:08:40.262179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:05.706 [2024-11-04 12:08:40.262202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:05.706 Passthru0 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:05.706 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.966 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:05.966 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:05.966 { 00:04:05.966 "name": "Malloc2", 00:04:05.966 "aliases": [ 00:04:05.966 "23d69acc-9c42-49a3-bddb-c3c6857c87c8" 00:04:05.966 ], 00:04:05.966 "product_name": "Malloc disk", 00:04:05.966 "block_size": 512, 00:04:05.966 "num_blocks": 16384, 00:04:05.966 "uuid": "23d69acc-9c42-49a3-bddb-c3c6857c87c8", 00:04:05.966 "assigned_rate_limits": { 00:04:05.966 "rw_ios_per_sec": 0, 00:04:05.966 "rw_mbytes_per_sec": 0, 00:04:05.966 "r_mbytes_per_sec": 0, 00:04:05.966 "w_mbytes_per_sec": 0 00:04:05.966 }, 00:04:05.966 "claimed": true, 00:04:05.966 "claim_type": "exclusive_write", 00:04:05.966 "zoned": false, 00:04:05.966 "supported_io_types": { 00:04:05.966 "read": true, 00:04:05.966 "write": true, 00:04:05.966 "unmap": true, 00:04:05.966 "flush": true, 00:04:05.966 "reset": true, 00:04:05.966 "nvme_admin": false, 00:04:05.966 "nvme_io": false, 00:04:05.966 "nvme_io_md": false, 00:04:05.966 "write_zeroes": true, 00:04:05.966 "zcopy": true, 00:04:05.966 "get_zone_info": false, 00:04:05.966 "zone_management": false, 00:04:05.966 "zone_append": false, 00:04:05.966 "compare": false, 00:04:05.966 "compare_and_write": false, 00:04:05.966 "abort": true, 00:04:05.966 "seek_hole": false, 00:04:05.966 "seek_data": false, 00:04:05.966 "copy": true, 00:04:05.966 "nvme_iov_md": false 00:04:05.966 }, 00:04:05.966 "memory_domains": [ 00:04:05.966 { 00:04:05.966 "dma_device_id": "system", 00:04:05.966 "dma_device_type": 1 00:04:05.966 }, 00:04:05.966 { 00:04:05.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.966 "dma_device_type": 2 00:04:05.966 } 00:04:05.966 ], 00:04:05.966 "driver_specific": {} 00:04:05.966 }, 00:04:05.966 { 00:04:05.966 "name": "Passthru0", 00:04:05.966 "aliases": [ 00:04:05.966 "8f3c5924-6c8e-5951-a082-0e3fb69caa9a" 00:04:05.966 ], 00:04:05.966 "product_name": "passthru", 00:04:05.966 "block_size": 512, 00:04:05.966 "num_blocks": 16384, 00:04:05.966 "uuid": "8f3c5924-6c8e-5951-a082-0e3fb69caa9a", 00:04:05.966 "assigned_rate_limits": { 00:04:05.966 "rw_ios_per_sec": 0, 00:04:05.966 "rw_mbytes_per_sec": 0, 00:04:05.966 "r_mbytes_per_sec": 0, 00:04:05.966 "w_mbytes_per_sec": 0 00:04:05.966 }, 00:04:05.966 "claimed": false, 00:04:05.966 "zoned": false, 00:04:05.966 "supported_io_types": { 00:04:05.966 "read": true, 00:04:05.966 "write": true, 00:04:05.966 "unmap": true, 00:04:05.966 "flush": true, 00:04:05.966 "reset": true, 
00:04:05.966 "nvme_admin": false, 00:04:05.966 "nvme_io": false, 00:04:05.966 "nvme_io_md": false, 00:04:05.966 "write_zeroes": true, 00:04:05.966 "zcopy": true, 00:04:05.966 "get_zone_info": false, 00:04:05.966 "zone_management": false, 00:04:05.966 "zone_append": false, 00:04:05.966 "compare": false, 00:04:05.966 "compare_and_write": false, 00:04:05.966 "abort": true, 00:04:05.966 "seek_hole": false, 00:04:05.966 "seek_data": false, 00:04:05.966 "copy": true, 00:04:05.966 "nvme_iov_md": false 00:04:05.966 }, 00:04:05.967 "memory_domains": [ 00:04:05.967 { 00:04:05.967 "dma_device_id": "system", 00:04:05.967 "dma_device_type": 1 00:04:05.967 }, 00:04:05.967 { 00:04:05.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.967 "dma_device_type": 2 00:04:05.967 } 00:04:05.967 ], 00:04:05.967 "driver_specific": { 00:04:05.967 "passthru": { 00:04:05.967 "name": "Passthru0", 00:04:05.967 "base_bdev_name": "Malloc2" 00:04:05.967 } 00:04:05.967 } 00:04:05.967 } 00:04:05.967 ]' 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:05.967 00:04:05.967 real 0m0.301s 00:04:05.967 user 0m0.190s 00:04:05.967 sys 0m0.042s 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.967 12:08:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.967 ************************************ 00:04:05.967 END TEST rpc_daemon_integrity 00:04:05.967 ************************************ 00:04:05.967 12:08:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:05.967 12:08:40 rpc -- rpc/rpc.sh@84 -- # killprocess 1403174 00:04:05.967 12:08:40 rpc -- common/autotest_common.sh@950 -- # '[' -z 1403174 ']' 00:04:05.967 12:08:40 rpc -- common/autotest_common.sh@954 -- # kill -0 1403174 00:04:05.967 12:08:40 rpc -- common/autotest_common.sh@955 -- # uname 00:04:05.967 12:08:40 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:05.967 12:08:40 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1403174 
00:04:05.967 12:08:40 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:05.967 12:08:40 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:05.967 12:08:40 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1403174' 00:04:05.967 killing process with pid 1403174 00:04:05.967 12:08:40 rpc -- common/autotest_common.sh@969 -- # kill 1403174 00:04:05.967 12:08:40 rpc -- common/autotest_common.sh@974 -- # wait 1403174 00:04:06.227 00:04:06.227 real 0m2.601s 00:04:06.227 user 0m3.362s 00:04:06.227 sys 0m0.764s 00:04:06.227 12:08:40 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.227 12:08:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.227 ************************************ 00:04:06.227 END TEST rpc 00:04:06.227 ************************************ 00:04:06.227 12:08:40 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:06.227 12:08:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.227 12:08:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.227 12:08:40 -- common/autotest_common.sh@10 -- # set +x 00:04:06.487 ************************************ 00:04:06.487 START TEST skip_rpc 00:04:06.487 ************************************ 00:04:06.487 12:08:40 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:06.487 * Looking for test storage... 00:04:06.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.487 12:08:40 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:06.487 12:08:40 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:06.487 12:08:40 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:06.487 12:08:40 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.487 12:08:40 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:06.487 12:08:41 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:06.487 12:08:41 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.487 12:08:41 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:06.487 12:08:41 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.487 12:08:41 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.487 12:08:41 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.487 12:08:41 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:06.487 12:08:41 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.487 12:08:41 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:06.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.487 --rc genhtml_branch_coverage=1 00:04:06.487 --rc genhtml_function_coverage=1 00:04:06.487 --rc genhtml_legend=1 00:04:06.487 --rc geninfo_all_blocks=1 00:04:06.487 --rc geninfo_unexecuted_blocks=1 00:04:06.487 00:04:06.487 ' 00:04:06.487 12:08:41 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:06.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.487 --rc genhtml_branch_coverage=1 00:04:06.487 --rc genhtml_function_coverage=1 00:04:06.487 --rc genhtml_legend=1 00:04:06.487 --rc geninfo_all_blocks=1 00:04:06.487 --rc geninfo_unexecuted_blocks=1 00:04:06.487 00:04:06.487 ' 00:04:06.487 12:08:41 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:06.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.487 --rc genhtml_branch_coverage=1 00:04:06.487 --rc genhtml_function_coverage=1 00:04:06.487 --rc genhtml_legend=1 00:04:06.487 --rc geninfo_all_blocks=1 00:04:06.487 --rc geninfo_unexecuted_blocks=1 00:04:06.487 00:04:06.487 ' 00:04:06.487 12:08:41 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:06.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.487 --rc genhtml_branch_coverage=1 00:04:06.487 --rc genhtml_function_coverage=1 00:04:06.487 --rc genhtml_legend=1 00:04:06.487 --rc geninfo_all_blocks=1 00:04:06.487 --rc geninfo_unexecuted_blocks=1 00:04:06.487 00:04:06.487 ' 00:04:06.487 12:08:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:06.487 12:08:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:06.487 12:08:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:06.487 12:08:41 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.487 12:08:41 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.487 12:08:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.488 ************************************ 00:04:06.488 START TEST skip_rpc 00:04:06.488 ************************************ 00:04:06.488 12:08:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:06.488 
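test_skip_rpc, starting here, is the inverse check: with --no-rpc-server the target must come up healthy while every client call fails. In isolation the test amounts to the following sketch (the plain sleep mirrors skip_rpc.sh, since there is no socket to poll):

```bash
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_ROOT/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5   # no RPC listener to wait on, hence the fixed delay
# Must fail: the suite asserts this via "NOT rpc_cmd spdk_get_version".
"$SPDK_ROOT/scripts/rpc.py" spdk_get_version && echo "unexpected: RPC answered"
kill "$spdk_pid"
```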
12:08:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1403838 00:04:06.488 12:08:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.488 12:08:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:06.488 12:08:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:06.748 [2024-11-04 12:08:41.113647] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:04:06.748 [2024-11-04 12:08:41.113698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1403838 ] 00:04:06.748 [2024-11-04 12:08:41.174599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.748 [2024-11-04 12:08:41.211812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1403838 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1403838 ']' 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1403838 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1403838 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1403838' 00:04:12.077 killing process with pid 1403838 00:04:12.077 12:08:46 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1403838 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1403838 00:04:12.077 00:04:12.077 real 0m5.284s 00:04:12.077 user 0m5.094s 00:04:12.077 sys 0m0.236s 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.077 12:08:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.077 ************************************ 00:04:12.077 END TEST skip_rpc 00:04:12.077 ************************************ 00:04:12.077 12:08:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:12.077 12:08:46 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.077 12:08:46 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.077 12:08:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.077 ************************************ 00:04:12.077 START TEST skip_rpc_with_json 00:04:12.077 ************************************ 00:04:12.077 12:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:12.077 12:08:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:12.077 12:08:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1405062 00:04:12.077 12:08:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:12.077 12:08:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:12.077 12:08:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1405062 00:04:12.077 12:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1405062 ']' 00:04:12.077 12:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.077 12:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:12.077 12:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.077 12:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:12.077 12:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.077 [2024-11-04 12:08:46.466926] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:04:12.077 [2024-11-04 12:08:46.466981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405062 ]
00:04:12.077 [2024-11-04 12:08:46.528543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:12.077 [2024-11-04 12:08:46.568905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:13.020 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:13.020 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0
00:04:13.020 12:08:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:04:13.020 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:13.020 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:13.020 [2024-11-04 12:08:47.272397] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:04:13.020 request:
00:04:13.020 {
00:04:13.020   "trtype": "tcp",
00:04:13.020   "method": "nvmf_get_transports",
00:04:13.020   "req_id": 1
00:04:13.020 }
00:04:13.020 Got JSON-RPC error response
00:04:13.020 response:
00:04:13.020 {
00:04:13.020   "code": -19,
00:04:13.020   "message": "No such device"
00:04:13.020 }
00:04:13.020 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:04:13.020 12:08:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:04:13.020 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:13.020 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:13.020 [2024-11-04 12:08:47.284522] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:13.020 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:13.020 12:08:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:04:13.020 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:13.020 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:13.020 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:13.020 12:08:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:13.020 {
00:04:13.020   "subsystems": [
00:04:13.020     {
00:04:13.020       "subsystem": "fsdev",
00:04:13.020       "config": [
00:04:13.020         {
00:04:13.020           "method": "fsdev_set_opts",
00:04:13.020           "params": {
00:04:13.020             "fsdev_io_pool_size": 65535,
00:04:13.020             "fsdev_io_cache_size": 256
00:04:13.020           }
00:04:13.020         }
00:04:13.020       ]
00:04:13.020     },
00:04:13.020     {
00:04:13.020       "subsystem": "vfio_user_target",
00:04:13.020       "config": null
00:04:13.020     },
00:04:13.020     {
00:04:13.020       "subsystem": "keyring",
00:04:13.020       "config": []
00:04:13.020     },
00:04:13.020     {
00:04:13.020       "subsystem": "iobuf",
00:04:13.020       "config": [
00:04:13.020         {
00:04:13.020           "method": "iobuf_set_options",
00:04:13.020           "params": {
00:04:13.020             "small_pool_count": 8192,
00:04:13.020             "large_pool_count": 1024,
00:04:13.020             "small_bufsize": 8192,
00:04:13.020             "large_bufsize": 135168
00:04:13.020           }
00:04:13.020         }
00:04:13.020       ]
00:04:13.020     },
00:04:13.020     {
00:04:13.020       "subsystem": "sock",
00:04:13.020       "config": [
00:04:13.020         {
00:04:13.020           "method": "sock_set_default_impl",
00:04:13.020           "params": {
00:04:13.020             "impl_name": "posix"
00:04:13.020           }
00:04:13.020         },
00:04:13.020         {
00:04:13.020           "method": "sock_impl_set_options",
00:04:13.020           "params": {
00:04:13.020             "impl_name": "ssl",
00:04:13.020             "recv_buf_size": 4096,
00:04:13.020             "send_buf_size": 4096,
00:04:13.020             "enable_recv_pipe": true,
00:04:13.020             "enable_quickack": false,
00:04:13.020             "enable_placement_id": 0,
00:04:13.020             "enable_zerocopy_send_server": true,
00:04:13.020             "enable_zerocopy_send_client": false,
00:04:13.020             "zerocopy_threshold": 0,
00:04:13.020             "tls_version": 0,
00:04:13.020             "enable_ktls": false
00:04:13.020           }
00:04:13.020         },
00:04:13.020         {
00:04:13.020           "method": "sock_impl_set_options",
00:04:13.020           "params": {
00:04:13.020             "impl_name": "posix",
00:04:13.020             "recv_buf_size": 2097152,
00:04:13.020             "send_buf_size": 2097152,
00:04:13.020             "enable_recv_pipe": true,
00:04:13.020             "enable_quickack": false,
00:04:13.020             "enable_placement_id": 0,
00:04:13.020             "enable_zerocopy_send_server": true,
00:04:13.020             "enable_zerocopy_send_client": false,
00:04:13.020             "zerocopy_threshold": 0,
00:04:13.020             "tls_version": 0,
00:04:13.020             "enable_ktls": false
00:04:13.020           }
00:04:13.020         }
00:04:13.020       ]
00:04:13.020     },
00:04:13.020     {
00:04:13.020       "subsystem": "vmd",
00:04:13.020       "config": []
00:04:13.020     },
00:04:13.020     {
00:04:13.020       "subsystem": "accel",
00:04:13.020       "config": [
00:04:13.020         {
00:04:13.020           "method": "accel_set_options",
00:04:13.020           "params": {
00:04:13.020             "small_cache_size": 128,
00:04:13.020             "large_cache_size": 16,
00:04:13.020             "task_count": 2048,
00:04:13.020             "sequence_count": 2048,
00:04:13.020             "buf_count": 2048
00:04:13.020           }
00:04:13.020         }
00:04:13.020       ]
00:04:13.020     },
00:04:13.020     {
00:04:13.020       "subsystem": "bdev",
00:04:13.020       "config": [
00:04:13.020         {
00:04:13.020           "method": "bdev_set_options",
00:04:13.020           "params": {
00:04:13.020             "bdev_io_pool_size": 65535,
00:04:13.020             "bdev_io_cache_size": 256,
00:04:13.020             "bdev_auto_examine": true,
00:04:13.020             "iobuf_small_cache_size": 128,
00:04:13.021             "iobuf_large_cache_size": 16
00:04:13.021           }
00:04:13.021         },
00:04:13.021         {
00:04:13.021           "method": "bdev_raid_set_options",
00:04:13.021           "params": {
00:04:13.021             "process_window_size_kb": 1024,
00:04:13.021             "process_max_bandwidth_mb_sec": 0
00:04:13.021           }
00:04:13.021         },
00:04:13.021         {
00:04:13.021           "method": "bdev_iscsi_set_options",
00:04:13.021           "params": {
00:04:13.021             "timeout_sec": 30
00:04:13.021           }
00:04:13.021         },
00:04:13.021         {
00:04:13.021           "method": "bdev_nvme_set_options",
00:04:13.021           "params": {
00:04:13.021             "action_on_timeout": "none",
00:04:13.021             "timeout_us": 0,
00:04:13.021             "timeout_admin_us": 0,
00:04:13.021             "keep_alive_timeout_ms": 10000,
00:04:13.021             "arbitration_burst": 0,
00:04:13.021             "low_priority_weight": 0,
00:04:13.021             "medium_priority_weight": 0,
00:04:13.021             "high_priority_weight": 0,
00:04:13.021             "nvme_adminq_poll_period_us": 10000,
00:04:13.021             "nvme_ioq_poll_period_us": 0,
00:04:13.021             "io_queue_requests": 0,
00:04:13.021             "delay_cmd_submit": true,
00:04:13.021             "transport_retry_count": 4,
00:04:13.021             "bdev_retry_count": 3,
00:04:13.021             "transport_ack_timeout": 0,
00:04:13.021             "ctrlr_loss_timeout_sec": 0,
00:04:13.021             "reconnect_delay_sec": 0,
00:04:13.021             "fast_io_fail_timeout_sec": 0,
00:04:13.021             "disable_auto_failback": false,
00:04:13.021             "generate_uuids": false,
00:04:13.021             "transport_tos": 0,
00:04:13.021             "nvme_error_stat": false,
00:04:13.021             "rdma_srq_size": 0,
00:04:13.021             "io_path_stat": false,
00:04:13.021             "allow_accel_sequence": false,
00:04:13.021             "rdma_max_cq_size": 0,
00:04:13.021             "rdma_cm_event_timeout_ms": 0,
00:04:13.021             "dhchap_digests": [
00:04:13.021               "sha256",
00:04:13.021               "sha384",
00:04:13.021               "sha512"
00:04:13.021             ],
00:04:13.021             "dhchap_dhgroups": [
00:04:13.021               "null",
00:04:13.021               "ffdhe2048",
00:04:13.021               "ffdhe3072",
00:04:13.021               "ffdhe4096",
00:04:13.021               "ffdhe6144",
00:04:13.021               "ffdhe8192"
00:04:13.021             ]
00:04:13.021           }
00:04:13.021         },
00:04:13.021         {
00:04:13.021           "method": "bdev_nvme_set_hotplug",
00:04:13.021           "params": {
00:04:13.021             "period_us": 100000,
00:04:13.021             "enable": false
00:04:13.021           }
00:04:13.021         },
00:04:13.021         {
00:04:13.021           "method": "bdev_wait_for_examine"
00:04:13.021         }
00:04:13.021       ]
00:04:13.021     },
00:04:13.021     {
00:04:13.021       "subsystem": "scsi",
00:04:13.021       "config": null
00:04:13.021     },
00:04:13.021     {
00:04:13.021       "subsystem": "scheduler",
00:04:13.021       "config": [
00:04:13.021         {
00:04:13.021           "method": "framework_set_scheduler",
00:04:13.021           "params": {
00:04:13.021             "name": "static"
00:04:13.021           }
00:04:13.021         }
00:04:13.021       ]
00:04:13.021     },
00:04:13.021     {
00:04:13.021       "subsystem": "vhost_scsi",
00:04:13.021       "config": []
00:04:13.021     },
00:04:13.021     {
00:04:13.021       "subsystem": "vhost_blk",
00:04:13.021       "config": []
00:04:13.021     },
00:04:13.021     {
00:04:13.021       "subsystem": "ublk",
00:04:13.021       "config": []
00:04:13.021     },
00:04:13.021     {
00:04:13.021       "subsystem": "nbd",
00:04:13.021       "config": []
00:04:13.021     },
00:04:13.021     {
00:04:13.021       "subsystem": "nvmf",
00:04:13.021       "config": [
00:04:13.021         {
00:04:13.021           "method": "nvmf_set_config",
00:04:13.021           "params": {
00:04:13.021             "discovery_filter": "match_any",
00:04:13.021             "admin_cmd_passthru": {
00:04:13.021               "identify_ctrlr": false
00:04:13.021             },
00:04:13.021             "dhchap_digests": [
00:04:13.021               "sha256",
00:04:13.021               "sha384",
00:04:13.021               "sha512"
00:04:13.021             ],
00:04:13.021             "dhchap_dhgroups": [
00:04:13.021               "null",
00:04:13.021               "ffdhe2048",
00:04:13.021               "ffdhe3072",
00:04:13.021               "ffdhe4096",
00:04:13.021               "ffdhe6144",
00:04:13.021               "ffdhe8192"
00:04:13.021             ]
00:04:13.021           }
00:04:13.021         },
00:04:13.021         {
00:04:13.021           "method": "nvmf_set_max_subsystems",
00:04:13.021           "params": {
00:04:13.021             "max_subsystems": 1024
00:04:13.021           }
00:04:13.021         },
00:04:13.021         {
00:04:13.021           "method": "nvmf_set_crdt",
00:04:13.021           "params": {
00:04:13.021             "crdt1": 0,
00:04:13.021             "crdt2": 0,
00:04:13.021             "crdt3": 0
00:04:13.021           }
00:04:13.021         },
00:04:13.021         {
00:04:13.021           "method": "nvmf_create_transport",
00:04:13.021           "params": {
00:04:13.021             "trtype": "TCP",
00:04:13.021             "max_queue_depth": 128,
00:04:13.021             "max_io_qpairs_per_ctrlr": 127,
00:04:13.021             "in_capsule_data_size": 4096,
00:04:13.021             "max_io_size": 131072,
00:04:13.021             "io_unit_size": 131072,
00:04:13.021             "max_aq_depth": 128,
00:04:13.021             "num_shared_buffers": 511,
00:04:13.021             "buf_cache_size": 4294967295,
00:04:13.021             "dif_insert_or_strip": false,
00:04:13.021             "zcopy": false,
00:04:13.021             "c2h_success": true,
00:04:13.021             "sock_priority": 0,
00:04:13.021             "abort_timeout_sec": 1,
00:04:13.021             "ack_timeout": 0,
00:04:13.021             "data_wr_pool_size": 0
00:04:13.021           }
00:04:13.021         }
00:04:13.021       ]
00:04:13.021     },
00:04:13.021     {
00:04:13.021       "subsystem": "iscsi",
00:04:13.021       "config": [
00:04:13.021         {
00:04:13.021           "method": "iscsi_set_options",
00:04:13.021           "params": {
00:04:13.021             "node_base": "iqn.2016-06.io.spdk",
00:04:13.021             "max_sessions": 128,
00:04:13.021             "max_connections_per_session": 2,
00:04:13.021             "max_queue_depth": 64,
00:04:13.021             "default_time2wait": 2,
00:04:13.021             "default_time2retain": 20,
00:04:13.021             "first_burst_length": 8192,
00:04:13.021             "immediate_data": true,
00:04:13.021             "allow_duplicated_isid": false,
00:04:13.021             "error_recovery_level": 0,
00:04:13.021             "nop_timeout": 60,
00:04:13.021             "nop_in_interval": 30,
00:04:13.021             "disable_chap": false,
00:04:13.021             "require_chap": false,
00:04:13.021             "mutual_chap": false,
00:04:13.021             "chap_group": 0,
00:04:13.021             "max_large_datain_per_connection": 64,
00:04:13.021             "max_r2t_per_connection": 4,
00:04:13.021             "pdu_pool_size": 36864,
00:04:13.021             "immediate_data_pool_size": 16384,
00:04:13.021             "data_out_pool_size": 2048
00:04:13.021           }
00:04:13.021         }
00:04:13.021       ]
00:04:13.021     }
00:04:13.021   ]
00:04:13.021 }
00:04:13.021 12:08:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:13.021 12:08:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1405062
00:04:13.021 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1405062 ']'
00:04:13.021 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1405062
00:04:13.021 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:04:13.021 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:13.021 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1405062
00:04:13.021 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:13.021 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:13.021 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1405062'
00:04:13.021 killing process with pid 1405062
00:04:13.021 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1405062
00:04:13.021 12:08:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1405062
00:04:13.283 12:08:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1405231
00:04:13.283 12:08:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:13.283 12:08:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:18.653 12:08:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1405231
00:04:18.653 12:08:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1405231 ']'
00:04:18.653 12:08:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1405231
00:04:18.653 12:08:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:04:18.653 12:08:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:18.653 12:08:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1405231
00:04:18.653 12:08:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:18.653 12:08:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:18.653 12:08:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing
process with pid 1405231' 00:04:18.653 killing process with pid 1405231 00:04:18.653 12:08:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1405231 00:04:18.653 12:08:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1405231 00:04:18.653 12:08:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:18.653 12:08:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:18.653 00:04:18.653 real 0m6.603s 00:04:18.653 user 0m6.522s 00:04:18.653 sys 0m0.559s 00:04:18.653 12:08:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.653 12:08:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.653 ************************************ 00:04:18.653 END TEST skip_rpc_with_json 00:04:18.653 ************************************ 00:04:18.653 12:08:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:18.653 12:08:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.653 12:08:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.653 12:08:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.653 ************************************ 00:04:18.653 START TEST skip_rpc_with_delay 00:04:18.653 ************************************ 00:04:18.653 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:18.653 12:08:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:18.653 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:18.653 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:18.653 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.653 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:18.653 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.653 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:18.653 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.653 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:18.653 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.654 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:18.654 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:18.654 [2024-11-04 
12:08:53.152638] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:18.654 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:18.654 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:18.654 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:18.654 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:18.654 00:04:18.654 real 0m0.075s 00:04:18.654 user 0m0.049s 00:04:18.654 sys 0m0.026s 00:04:18.654 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.654 12:08:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:18.654 ************************************ 00:04:18.654 END TEST skip_rpc_with_delay 00:04:18.654 ************************************ 00:04:18.654 12:08:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:18.654 12:08:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:18.654 12:08:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:18.654 12:08:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.654 12:08:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.654 12:08:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.915 ************************************ 00:04:18.915 START TEST exit_on_failed_rpc_init 00:04:18.915 ************************************ 00:04:18.915 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:18.915 12:08:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1406476 00:04:18.915 12:08:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1406476 00:04:18.915 12:08:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:18.915 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1406476 ']' 00:04:18.915 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.915 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:18.915 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.915 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:18.915 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:18.915 [2024-11-04 12:08:53.306836] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
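skip_rpc_with_delay, which finished just above, checks the inverse option conflict: --wait-for-rpc asks the app to pause its startup until an RPC tells it to continue, so combining it with --no-rpc-server must be rejected. A minimal reproduction (binary path as in the trace; a sketch, not the suite's NOT helper):

  # Must exit non-zero immediately; app.c logs:
  # "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
  if spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "expected startup to be rejected" >&2
    exit 1
  fi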
00:04:18.915 [2024-11-04 12:08:53.306886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1406476 ] 00:04:18.915 [2024-11-04 12:08:53.369661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.915 [2024-11-04 12:08:53.405966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.177 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:19.177 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:19.177 12:08:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.177 12:08:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:19.177 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:19.177 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:19.177 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.177 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:19.177 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.177 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:19.177 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.177 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:19.177 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.177 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:19.177 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:19.177 [2024-11-04 12:08:53.653103] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:04:19.177 [2024-11-04 12:08:53.653154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1406485 ] 00:04:19.177 [2024-11-04 12:08:53.729356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.437 [2024-11-04 12:08:53.765058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.437 [2024-11-04 12:08:53.765107] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
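exit_on_failed_rpc_init hinges on the errors printed around here: two targets cannot share the default /var/tmp/spdk.sock. Sketched by hand (assumes the default RPC socket; the real test only cares that the second instance exits non-zero):

  # First instance owns /var/tmp/spdk.sock; the second must fail to init RPC.
  spdk/build/bin/spdk_tgt -m 0x1 &
  sleep 5
  spdk/build/bin/spdk_tgt -m 0x2   # rpc.c: "... /var/tmp/spdk.sock in use. Specify another."
  echo "second instance exit code: $?"  # non-zero is the tested outcome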
00:04:19.437 [2024-11-04 12:08:53.765117] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:19.437 [2024-11-04 12:08:53.765124] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1406476 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1406476 ']' 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1406476 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1406476 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1406476' 00:04:19.437 killing process with pid 1406476 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1406476 00:04:19.437 12:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1406476 00:04:19.698 00:04:19.698 real 0m0.836s 00:04:19.698 user 0m0.995s 00:04:19.698 sys 0m0.348s 00:04:19.698 12:08:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.698 12:08:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:19.698 ************************************ 00:04:19.698 END TEST exit_on_failed_rpc_init 00:04:19.698 ************************************ 00:04:19.698 12:08:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:19.698 00:04:19.698 real 0m13.307s 00:04:19.698 user 0m12.889s 00:04:19.698 sys 0m1.477s 00:04:19.698 12:08:54 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.698 12:08:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.698 ************************************ 00:04:19.698 END TEST skip_rpc 00:04:19.698 ************************************ 00:04:19.698 12:08:54 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:19.698 12:08:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.698 12:08:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.698 12:08:54 -- 
common/autotest_common.sh@10 -- # set +x 00:04:19.698 ************************************ 00:04:19.698 START TEST rpc_client 00:04:19.698 ************************************ 00:04:19.698 12:08:54 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:19.960 * Looking for test storage... 00:04:19.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:19.960 12:08:54 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:19.960 12:08:54 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:19.960 12:08:54 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:19.960 12:08:54 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.960 12:08:54 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:19.960 12:08:54 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.960 12:08:54 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:19.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.960 --rc genhtml_branch_coverage=1 00:04:19.960 --rc genhtml_function_coverage=1 00:04:19.960 --rc genhtml_legend=1 00:04:19.960 --rc geninfo_all_blocks=1 00:04:19.960 --rc geninfo_unexecuted_blocks=1 00:04:19.960 00:04:19.960 ' 00:04:19.960 12:08:54 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:19.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.960 --rc genhtml_branch_coverage=1 00:04:19.960 --rc genhtml_function_coverage=1 00:04:19.960 --rc genhtml_legend=1 00:04:19.960 --rc geninfo_all_blocks=1 00:04:19.960 --rc geninfo_unexecuted_blocks=1 00:04:19.960 00:04:19.960 ' 00:04:19.960 12:08:54 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:19.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.960 --rc genhtml_branch_coverage=1 00:04:19.960 --rc genhtml_function_coverage=1 00:04:19.960 --rc genhtml_legend=1 00:04:19.960 --rc geninfo_all_blocks=1 00:04:19.960 --rc geninfo_unexecuted_blocks=1 00:04:19.960 00:04:19.960 ' 00:04:19.960 12:08:54 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:19.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.960 --rc genhtml_branch_coverage=1 00:04:19.960 --rc genhtml_function_coverage=1 00:04:19.960 --rc genhtml_legend=1 00:04:19.960 --rc geninfo_all_blocks=1 00:04:19.960 --rc geninfo_unexecuted_blocks=1 00:04:19.960 00:04:19.960 ' 00:04:19.960 12:08:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:19.960 OK 00:04:19.960 12:08:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:19.960 00:04:19.960 real 0m0.232s 00:04:19.960 user 0m0.129s 00:04:19.960 sys 0m0.117s 00:04:19.960 12:08:54 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.960 12:08:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:19.960 ************************************ 00:04:19.960 END TEST rpc_client 00:04:19.960 ************************************ 00:04:19.960 12:08:54 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
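Every START TEST/END TEST banner and the real/user/sys triple in this log come from the suite's run_test wrapper. Roughly, as a simplified stand-in for the helper in autotest_common.sh (the real one also does timing and exit-status bookkeeping):

  run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                 # emits the real/user/sys lines seen above
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
  }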
00:04:19.960 12:08:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.960 12:08:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.960 12:08:54 -- common/autotest_common.sh@10 -- # set +x 00:04:19.960 ************************************ 00:04:19.960 START TEST json_config 00:04:19.960 ************************************ 00:04:19.960 12:08:54 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:20.223 12:08:54 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:20.223 12:08:54 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:20.223 12:08:54 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:20.223 12:08:54 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:20.223 12:08:54 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.223 12:08:54 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.223 12:08:54 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.223 12:08:54 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.223 12:08:54 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.223 12:08:54 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.223 12:08:54 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.223 12:08:54 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.223 12:08:54 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.223 12:08:54 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.223 12:08:54 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.223 12:08:54 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:20.223 12:08:54 json_config -- scripts/common.sh@345 -- # : 1 00:04:20.223 12:08:54 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.223 12:08:54 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.223 12:08:54 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:20.223 12:08:54 json_config -- scripts/common.sh@353 -- # local d=1 00:04:20.223 12:08:54 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.223 12:08:54 json_config -- scripts/common.sh@355 -- # echo 1 00:04:20.223 12:08:54 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.223 12:08:54 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:20.223 12:08:54 json_config -- scripts/common.sh@353 -- # local d=2 00:04:20.223 12:08:54 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.223 12:08:54 json_config -- scripts/common.sh@355 -- # echo 2 00:04:20.223 12:08:54 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.223 12:08:54 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.223 12:08:54 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.223 12:08:54 json_config -- scripts/common.sh@368 -- # return 0 00:04:20.223 12:08:54 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.223 12:08:54 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:20.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.223 --rc genhtml_branch_coverage=1 00:04:20.223 --rc genhtml_function_coverage=1 00:04:20.223 --rc genhtml_legend=1 00:04:20.223 --rc geninfo_all_blocks=1 00:04:20.223 --rc geninfo_unexecuted_blocks=1 00:04:20.223 00:04:20.223 ' 00:04:20.223 12:08:54 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:20.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.223 --rc genhtml_branch_coverage=1 00:04:20.223 --rc genhtml_function_coverage=1 00:04:20.223 --rc genhtml_legend=1 00:04:20.223 --rc geninfo_all_blocks=1 00:04:20.223 --rc geninfo_unexecuted_blocks=1 00:04:20.223 00:04:20.223 ' 00:04:20.223 12:08:54 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:20.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.223 --rc genhtml_branch_coverage=1 00:04:20.223 --rc genhtml_function_coverage=1 00:04:20.223 --rc genhtml_legend=1 00:04:20.223 --rc geninfo_all_blocks=1 00:04:20.223 --rc geninfo_unexecuted_blocks=1 00:04:20.223 00:04:20.223 ' 00:04:20.223 12:08:54 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:20.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.223 --rc genhtml_branch_coverage=1 00:04:20.223 --rc genhtml_function_coverage=1 00:04:20.223 --rc genhtml_legend=1 00:04:20.223 --rc geninfo_all_blocks=1 00:04:20.223 --rc geninfo_unexecuted_blocks=1 00:04:20.223 00:04:20.223 ' 00:04:20.223 12:08:54 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:20.223 12:08:54 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:20.223 12:08:54 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:20.223 12:08:54 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:20.223 12:08:54 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:20.223 12:08:54 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:20.223 12:08:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.223 12:08:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.223 12:08:54 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.223 12:08:54 json_config -- paths/export.sh@5 -- # export PATH 00:04:20.223 12:08:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@51 -- # : 0 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:20.223 12:08:54 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:20.223 12:08:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:20.224 12:08:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:20.224 12:08:54 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:20.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:20.224 12:08:54 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:20.224 12:08:54 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:20.224 12:08:54 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:20.224 INFO: JSON configuration test init 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:20.224 12:08:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:20.224 12:08:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:20.224 12:08:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:20.224 12:08:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.224 12:08:54 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:20.224 12:08:54 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:20.224 12:08:54 json_config -- json_config/common.sh@10 -- # shift 00:04:20.224 12:08:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:20.224 12:08:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:20.224 12:08:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:20.224 12:08:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.224 12:08:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.224 12:08:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1406923 00:04:20.224 12:08:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:20.224 Waiting for target to run... 00:04:20.224 12:08:54 json_config -- json_config/common.sh@25 -- # waitforlisten 1406923 /var/tmp/spdk_tgt.sock 00:04:20.224 12:08:54 json_config -- common/autotest_common.sh@831 -- # '[' -z 1406923 ']' 00:04:20.224 12:08:54 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:20.224 12:08:54 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:20.224 12:08:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:20.224 12:08:54 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:20.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:20.224 12:08:54 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:20.224 12:08:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.484 [2024-11-04 12:08:54.793815] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
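waitforlisten, invoked just above with pid 1406923 and /var/tmp/spdk_tgt.sock, amounts to polling the RPC socket until the target answers. A sketch of the idea (the real helper in autotest_common.sh uses max_retries=100 and also checks that the pid is still alive):

  spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  for ((i = 0; i < 100; i++)); do
    spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done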
00:04:20.484 [2024-11-04 12:08:54.793870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1406923 ] 00:04:20.744 [2024-11-04 12:08:55.088101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.744 [2024-11-04 12:08:55.117706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.321 12:08:55 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:21.321 12:08:55 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:21.321 12:08:55 json_config -- json_config/common.sh@26 -- # echo '' 00:04:21.321 00:04:21.321 12:08:55 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:21.321 12:08:55 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:21.321 12:08:55 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:21.321 12:08:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.321 12:08:55 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:21.321 12:08:55 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:21.321 12:08:55 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:21.321 12:08:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.321 12:08:55 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:21.321 12:08:55 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:21.321 12:08:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:21.890 12:08:56 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:21.890 12:08:56 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:21.890 12:08:56 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:21.890 12:08:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.890 12:08:56 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:21.890 12:08:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:21.890 12:08:56 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:21.890 12:08:56 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:21.891 12:08:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:21.891 12:08:56 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@54 -- # sort 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:21.891 12:08:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:21.891 12:08:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:21.891 12:08:56 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:21.891 12:08:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:21.891 12:08:56 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:21.891 12:08:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:22.151 MallocForNvmf0 00:04:22.151 12:08:56 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:22.151 12:08:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:22.411 MallocForNvmf1 00:04:22.411 12:08:56 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:22.411 12:08:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:22.411 [2024-11-04 12:08:56.970592] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:22.672 12:08:56 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:22.672 12:08:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:22.672 12:08:57 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:22.672 12:08:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:22.931 12:08:57 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:22.931 12:08:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:23.192 12:08:57 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:23.192 12:08:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:23.192 [2024-11-04 12:08:57.664812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:23.192 12:08:57 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:23.192 12:08:57 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:23.192 12:08:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.192 12:08:57 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:23.192 12:08:57 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:23.192 12:08:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.453 12:08:57 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:23.453 12:08:57 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:23.453 12:08:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:23.453 MallocBdevForConfigChangeCheck 00:04:23.453 12:08:57 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:23.453 12:08:57 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:23.453 12:08:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.453 12:08:57 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:23.453 12:08:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:24.025 12:08:58 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:24.025 INFO: shutting down applications... 
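The NVMf subsystem configuration assembled above, replayed as the equivalent standalone rpc.py calls (arguments copied from the tgt_rpc lines in the trace; socket as used by the test):

  rpc=spdk/scripts/rpc.py; sock=/var/tmp/spdk_tgt.sock
  $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420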
00:04:24.025 12:08:58 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:24.025 12:08:58 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:24.025 12:08:58 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:24.025 12:08:58 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:24.285 Calling clear_iscsi_subsystem 00:04:24.285 Calling clear_nvmf_subsystem 00:04:24.285 Calling clear_nbd_subsystem 00:04:24.285 Calling clear_ublk_subsystem 00:04:24.285 Calling clear_vhost_blk_subsystem 00:04:24.285 Calling clear_vhost_scsi_subsystem 00:04:24.285 Calling clear_bdev_subsystem 00:04:24.285 12:08:58 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:24.285 12:08:58 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:24.285 12:08:58 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:24.285 12:08:58 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:24.285 12:08:58 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:24.285 12:08:58 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:24.859 12:08:59 json_config -- json_config/json_config.sh@352 -- # break 00:04:24.859 12:08:59 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:24.859 12:08:59 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:24.859 12:08:59 json_config -- json_config/common.sh@31 -- # local app=target 00:04:24.859 12:08:59 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:24.859 12:08:59 json_config -- json_config/common.sh@35 -- # [[ -n 1406923 ]] 00:04:24.859 12:08:59 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1406923 00:04:24.859 12:08:59 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:24.859 12:08:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.859 12:08:59 json_config -- json_config/common.sh@41 -- # kill -0 1406923 00:04:24.859 12:08:59 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:25.120 12:08:59 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:25.120 12:08:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.120 12:08:59 json_config -- json_config/common.sh@41 -- # kill -0 1406923 00:04:25.120 12:08:59 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:25.120 12:08:59 json_config -- json_config/common.sh@43 -- # break 00:04:25.120 12:08:59 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:25.120 12:08:59 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:25.120 SPDK target shutdown done 00:04:25.121 12:08:59 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:25.121 INFO: relaunching applications... 
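The target shutdown logged above is a poll, not a blocking wait: common.sh sends SIGINT, then probes the PID with kill -0 up to 30 times at half-second intervals before declaring the target down. The pattern in brief (variable names as in the trace):

kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2>/dev/null || break   # kill -0 delivers no signal; it only tests liveness
    sleep 0.5
done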
00:04:25.121 12:08:59 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.121 12:08:59 json_config -- json_config/common.sh@9 -- # local app=target 00:04:25.121 12:08:59 json_config -- json_config/common.sh@10 -- # shift 00:04:25.121 12:08:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:25.121 12:08:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:25.121 12:08:59 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:25.121 12:08:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.121 12:08:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.121 12:08:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1408010 00:04:25.121 12:08:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:25.121 Waiting for target to run... 00:04:25.121 12:08:59 json_config -- json_config/common.sh@25 -- # waitforlisten 1408010 /var/tmp/spdk_tgt.sock 00:04:25.121 12:08:59 json_config -- common/autotest_common.sh@831 -- # '[' -z 1408010 ']' 00:04:25.121 12:08:59 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.121 12:08:59 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:25.121 12:08:59 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:25.121 12:08:59 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:25.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:25.121 12:08:59 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:25.121 12:08:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.381 [2024-11-04 12:08:59.694111] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:04:25.381 [2024-11-04 12:08:59.694199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1408010 ] 00:04:25.642 [2024-11-04 12:09:00.009761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.642 [2024-11-04 12:09:00.046372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.213 [2024-11-04 12:09:00.564967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:26.213 [2024-11-04 12:09:00.597326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:26.213 12:09:00 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:26.213 12:09:00 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:26.213 12:09:00 json_config -- json_config/common.sh@26 -- # echo '' 00:04:26.213 00:04:26.213 12:09:00 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:26.213 12:09:00 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:26.213 INFO: Checking if target configuration is the same... 
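The "is the configuration the same" check that follows normalizes both sides before diffing: the running target's save_config output and the on-disk spdk_tgt_config.json are each passed through config_filter.py -method sort, so JSON key and array ordering cannot cause a false mismatch. Roughly (temp-file names vary per run):

rpc.py -s /var/tmp/spdk_tgt.sock save_config | config_filter.py -method sort > /tmp/running.json
config_filter.py -method sort < spdk_tgt_config.json > /tmp/ondisk.json
diff -u /tmp/running.json /tmp/ondisk.json && echo 'configs match'   # non-zero exit means drift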
00:04:26.213 12:09:00 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.213 12:09:00 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:26.213 12:09:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.213 + '[' 2 -ne 2 ']' 00:04:26.213 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:26.213 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:26.213 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:26.213 +++ basename /dev/fd/62 00:04:26.213 ++ mktemp /tmp/62.XXX 00:04:26.213 + tmp_file_1=/tmp/62.E1U 00:04:26.213 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.213 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:26.213 + tmp_file_2=/tmp/spdk_tgt_config.json.sqg 00:04:26.213 + ret=0 00:04:26.213 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:26.474 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:26.475 + diff -u /tmp/62.E1U /tmp/spdk_tgt_config.json.sqg 00:04:26.475 + echo 'INFO: JSON config files are the same' 00:04:26.475 INFO: JSON config files are the same 00:04:26.475 + rm /tmp/62.E1U /tmp/spdk_tgt_config.json.sqg 00:04:26.475 + exit 0 00:04:26.475 12:09:01 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:26.475 12:09:01 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:26.475 INFO: changing configuration and checking if this can be detected... 00:04:26.475 12:09:01 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:26.475 12:09:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:26.736 12:09:01 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.736 12:09:01 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:26.736 12:09:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.736 + '[' 2 -ne 2 ']' 00:04:26.736 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:26.736 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:26.736 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:26.736 +++ basename /dev/fd/62 00:04:26.736 ++ mktemp /tmp/62.XXX 00:04:26.736 + tmp_file_1=/tmp/62.NCs 00:04:26.736 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.736 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:26.736 + tmp_file_2=/tmp/spdk_tgt_config.json.Us0 00:04:26.736 + ret=0 00:04:26.736 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:26.998 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:26.998 + diff -u /tmp/62.NCs /tmp/spdk_tgt_config.json.Us0 00:04:26.998 + ret=1 00:04:26.998 + echo '=== Start of file: /tmp/62.NCs ===' 00:04:26.998 + cat /tmp/62.NCs 00:04:27.260 + echo '=== End of file: /tmp/62.NCs ===' 00:04:27.260 + echo '' 00:04:27.260 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Us0 ===' 00:04:27.260 + cat /tmp/spdk_tgt_config.json.Us0 00:04:27.260 + echo '=== End of file: /tmp/spdk_tgt_config.json.Us0 ===' 00:04:27.260 + echo '' 00:04:27.260 + rm /tmp/62.NCs /tmp/spdk_tgt_config.json.Us0 00:04:27.260 + exit 1 00:04:27.260 12:09:01 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:27.260 INFO: configuration change detected. 00:04:27.260 12:09:01 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:27.260 12:09:01 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:27.260 12:09:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:27.260 12:09:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.260 12:09:01 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:27.260 12:09:01 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:27.260 12:09:01 json_config -- json_config/json_config.sh@324 -- # [[ -n 1408010 ]] 00:04:27.260 12:09:01 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:27.260 12:09:01 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:27.260 12:09:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:27.260 12:09:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.260 12:09:01 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:27.260 12:09:01 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:27.260 12:09:01 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:27.260 12:09:01 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:27.260 12:09:01 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:27.260 12:09:01 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:27.260 12:09:01 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.260 12:09:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.260 12:09:01 json_config -- json_config/json_config.sh@330 -- # killprocess 1408010 00:04:27.260 12:09:01 json_config -- common/autotest_common.sh@950 -- # '[' -z 1408010 ']' 00:04:27.260 12:09:01 json_config -- common/autotest_common.sh@954 -- # kill -0 1408010 00:04:27.260 12:09:01 json_config -- common/autotest_common.sh@955 -- # uname 00:04:27.260 12:09:01 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:27.260 12:09:01 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1408010 00:04:27.260 12:09:01 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:27.260 12:09:01 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:27.260 12:09:01 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1408010' 00:04:27.260 killing process with pid 1408010 00:04:27.260 12:09:01 json_config -- common/autotest_common.sh@969 -- # kill 1408010 00:04:27.260 12:09:01 json_config -- common/autotest_common.sh@974 -- # wait 1408010 00:04:27.522 12:09:01 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.522 12:09:01 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:27.522 12:09:01 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.522 12:09:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.522 12:09:02 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:27.522 12:09:02 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:27.522 INFO: Success 00:04:27.522 00:04:27.522 real 0m7.521s 00:04:27.522 user 0m9.050s 00:04:27.522 sys 0m2.013s 00:04:27.522 12:09:02 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.522 12:09:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.522 ************************************ 00:04:27.522 END TEST json_config 00:04:27.522 ************************************ 00:04:27.522 12:09:02 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:27.522 12:09:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.522 12:09:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.522 12:09:02 -- common/autotest_common.sh@10 -- # set +x 00:04:27.785 ************************************ 00:04:27.785 START TEST json_config_extra_key 00:04:27.785 ************************************ 00:04:27.785 12:09:02 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:27.785 12:09:02 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:27.785 12:09:02 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:27.785 12:09:02 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:27.785 12:09:02 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.785 12:09:02 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:27.785 12:09:02 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.785 12:09:02 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:27.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.785 --rc genhtml_branch_coverage=1 00:04:27.785 --rc genhtml_function_coverage=1 00:04:27.785 --rc genhtml_legend=1 00:04:27.785 --rc geninfo_all_blocks=1 00:04:27.785 --rc geninfo_unexecuted_blocks=1 00:04:27.785 00:04:27.785 ' 00:04:27.785 12:09:02 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:27.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.785 --rc genhtml_branch_coverage=1 00:04:27.785 --rc genhtml_function_coverage=1 00:04:27.785 --rc genhtml_legend=1 00:04:27.785 --rc geninfo_all_blocks=1 00:04:27.785 --rc geninfo_unexecuted_blocks=1 00:04:27.785 00:04:27.785 ' 00:04:27.785 12:09:02 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:27.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.785 --rc genhtml_branch_coverage=1 00:04:27.785 --rc genhtml_function_coverage=1 00:04:27.785 --rc genhtml_legend=1 00:04:27.785 --rc geninfo_all_blocks=1 00:04:27.785 --rc geninfo_unexecuted_blocks=1 00:04:27.785 00:04:27.785 ' 00:04:27.785 12:09:02 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:27.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.785 --rc genhtml_branch_coverage=1 00:04:27.785 --rc genhtml_function_coverage=1 00:04:27.785 --rc genhtml_legend=1 00:04:27.785 --rc geninfo_all_blocks=1 00:04:27.785 --rc geninfo_unexecuted_blocks=1 00:04:27.785 00:04:27.785 ' 00:04:27.785 12:09:02 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.785 12:09:02 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.785 12:09:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.785 12:09:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.785 12:09:02 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.785 12:09:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:27.785 12:09:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:27.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:27.785 12:09:02 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:27.785 12:09:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:27.785 12:09:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:27.785 12:09:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:27.785 12:09:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:27.785 12:09:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:27.785 12:09:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:27.785 12:09:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:27.786 12:09:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:27.786 12:09:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:27.786 12:09:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:27.786 12:09:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:27.786 INFO: launching applications... 
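Note the bash error captured above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', a numeric test against an empty string, which [ rejects with "integer expression expected"; the run survives because the failed test merely returns non-zero. A defensive form that avoids the noise (FLAG is a placeholder, not the variable the script actually tests):

if [ "${FLAG:-0}" -eq 1 ]; then   # default empty/unset to 0 before the numeric compare
    echo 'feature enabled'
fi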
00:04:27.786 12:09:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:27.786 12:09:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:27.786 12:09:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:27.786 12:09:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:27.786 12:09:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:27.786 12:09:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:27.786 12:09:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.786 12:09:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.786 12:09:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1408656 00:04:27.786 12:09:02 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:27.786 12:09:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:27.786 Waiting for target to run... 00:04:27.786 12:09:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1408656 /var/tmp/spdk_tgt.sock 00:04:27.786 12:09:02 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1408656 ']' 00:04:27.786 12:09:02 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:27.786 12:09:02 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:27.786 12:09:02 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:27.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:27.786 12:09:02 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:27.786 12:09:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:27.786 [2024-11-04 12:09:02.344697] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:04:27.786 [2024-11-04 12:09:02.344774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1408656 ] 00:04:28.047 [2024-11-04 12:09:02.589107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.308 [2024-11-04 12:09:02.617985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.569 12:09:03 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:28.569 12:09:03 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:28.569 12:09:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:28.569 00:04:28.569 12:09:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:28.569 INFO: shutting down applications... 
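The extra_key.json handed to spdk_tgt at the top of this test is an ordinary SPDK startup config. For orientation, a minimal file of that shape; the contents here are assumed for illustration, since the real extra_key.json is not printed in this log:

cat > /tmp/extra_key.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "MallocForTest", "num_blocks": 20480, "block_size": 512 }
        }
      ]
    }
  ]
}
JSON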
00:04:28.569 12:09:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:28.569 12:09:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:28.569 12:09:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:28.569 12:09:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1408656 ]] 00:04:28.569 12:09:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1408656 00:04:28.569 12:09:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:28.569 12:09:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:28.569 12:09:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1408656 00:04:28.569 12:09:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:29.140 12:09:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:29.140 12:09:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.140 12:09:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1408656 00:04:29.140 12:09:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:29.140 12:09:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:29.140 12:09:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:29.140 12:09:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:29.140 SPDK target shutdown done 00:04:29.140 12:09:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:29.140 Success 00:04:29.140 00:04:29.140 real 0m1.538s 00:04:29.140 user 0m1.201s 00:04:29.140 sys 0m0.339s 00:04:29.140 12:09:03 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.140 12:09:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:29.140 ************************************ 00:04:29.140 END TEST json_config_extra_key 00:04:29.140 ************************************ 00:04:29.140 12:09:03 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:29.140 12:09:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.140 12:09:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.140 12:09:03 -- common/autotest_common.sh@10 -- # set +x 00:04:29.403 ************************************ 00:04:29.403 START TEST alias_rpc 00:04:29.403 ************************************ 00:04:29.403 12:09:03 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:29.403 * Looking for test storage... 
00:04:29.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:29.403 12:09:03 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:29.403 12:09:03 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:29.403 12:09:03 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:29.403 12:09:03 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.403 12:09:03 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:29.403 12:09:03 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.403 12:09:03 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:29.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.403 --rc genhtml_branch_coverage=1 00:04:29.403 --rc genhtml_function_coverage=1 00:04:29.403 --rc genhtml_legend=1 00:04:29.403 --rc geninfo_all_blocks=1 00:04:29.403 --rc geninfo_unexecuted_blocks=1 00:04:29.403 00:04:29.403 ' 00:04:29.403 12:09:03 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:29.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.403 --rc genhtml_branch_coverage=1 00:04:29.403 --rc genhtml_function_coverage=1 00:04:29.403 --rc genhtml_legend=1 00:04:29.403 --rc geninfo_all_blocks=1 00:04:29.403 --rc geninfo_unexecuted_blocks=1 00:04:29.403 00:04:29.403 ' 00:04:29.403 12:09:03 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:29.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.403 --rc genhtml_branch_coverage=1 00:04:29.403 --rc genhtml_function_coverage=1 00:04:29.403 --rc genhtml_legend=1 00:04:29.403 --rc geninfo_all_blocks=1 00:04:29.403 --rc geninfo_unexecuted_blocks=1 00:04:29.403 00:04:29.403 ' 00:04:29.403 12:09:03 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:29.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.403 --rc genhtml_branch_coverage=1 00:04:29.403 --rc genhtml_function_coverage=1 00:04:29.403 --rc genhtml_legend=1 00:04:29.403 --rc geninfo_all_blocks=1 00:04:29.403 --rc geninfo_unexecuted_blocks=1 00:04:29.403 00:04:29.403 ' 00:04:29.403 12:09:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:29.403 12:09:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1409048 00:04:29.403 12:09:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1409048 00:04:29.403 12:09:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.403 12:09:03 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1409048 ']' 00:04:29.403 12:09:03 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.403 12:09:03 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:29.403 12:09:03 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.403 12:09:03 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:29.403 12:09:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.664 [2024-11-04 12:09:03.982132] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
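waitforlisten above blocks until the freshly forked spdk_tgt answers on its RPC socket, with max_retries=100 and the address defaulting to /var/tmp/spdk.sock as shown in the trace. A rough sketch of that helper; the polling internals here are assumed, not copied from autotest_common.sh:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1                            # target died during startup
        rpc.py -s "$rpc_addr" -t 1 rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    return 1
}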
00:04:29.664 [2024-11-04 12:09:03.982185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409048 ] 00:04:29.664 [2024-11-04 12:09:04.042844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.664 [2024-11-04 12:09:04.078423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.236 12:09:04 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:30.236 12:09:04 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:30.236 12:09:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:30.496 12:09:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1409048 00:04:30.496 12:09:04 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1409048 ']' 00:04:30.496 12:09:04 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1409048 00:04:30.496 12:09:04 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:30.496 12:09:04 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:30.496 12:09:04 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1409048 00:04:30.496 12:09:05 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:30.496 12:09:05 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:30.496 12:09:05 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1409048' 00:04:30.496 killing process with pid 1409048 00:04:30.496 12:09:05 alias_rpc -- common/autotest_common.sh@969 -- # kill 1409048 00:04:30.496 12:09:05 alias_rpc -- common/autotest_common.sh@974 -- # wait 1409048 00:04:30.757 00:04:30.757 real 0m1.526s 00:04:30.757 user 0m1.710s 00:04:30.757 sys 0m0.385s 00:04:30.757 12:09:05 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.757 12:09:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.757 ************************************ 00:04:30.757 END TEST alias_rpc 00:04:30.757 ************************************ 00:04:30.757 12:09:05 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:30.757 12:09:05 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:30.757 12:09:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.757 12:09:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.757 12:09:05 -- common/autotest_common.sh@10 -- # set +x 00:04:30.757 ************************************ 00:04:30.757 START TEST spdkcli_tcp 00:04:30.757 ************************************ 00:04:30.757 12:09:05 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:31.018 * Looking for test storage... 
00:04:31.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:31.018 12:09:05 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:31.018 12:09:05 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:31.018 12:09:05 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:31.018 12:09:05 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.018 12:09:05 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:31.018 12:09:05 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.018 12:09:05 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:31.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.018 --rc genhtml_branch_coverage=1 00:04:31.018 --rc genhtml_function_coverage=1 00:04:31.018 --rc genhtml_legend=1 00:04:31.018 --rc geninfo_all_blocks=1 00:04:31.018 --rc geninfo_unexecuted_blocks=1 00:04:31.018 00:04:31.018 ' 00:04:31.018 12:09:05 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:31.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.018 --rc genhtml_branch_coverage=1 00:04:31.018 --rc genhtml_function_coverage=1 00:04:31.018 --rc genhtml_legend=1 00:04:31.018 --rc geninfo_all_blocks=1 00:04:31.018 --rc 
geninfo_unexecuted_blocks=1 00:04:31.018 00:04:31.018 ' 00:04:31.018 12:09:05 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:31.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.018 --rc genhtml_branch_coverage=1 00:04:31.018 --rc genhtml_function_coverage=1 00:04:31.018 --rc genhtml_legend=1 00:04:31.018 --rc geninfo_all_blocks=1 00:04:31.018 --rc geninfo_unexecuted_blocks=1 00:04:31.018 00:04:31.018 ' 00:04:31.018 12:09:05 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:31.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.018 --rc genhtml_branch_coverage=1 00:04:31.018 --rc genhtml_function_coverage=1 00:04:31.018 --rc genhtml_legend=1 00:04:31.018 --rc geninfo_all_blocks=1 00:04:31.018 --rc geninfo_unexecuted_blocks=1 00:04:31.018 00:04:31.018 ' 00:04:31.018 12:09:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:31.018 12:09:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:31.018 12:09:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:31.018 12:09:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:31.018 12:09:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:31.018 12:09:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:31.018 12:09:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:31.018 12:09:05 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:31.018 12:09:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.018 12:09:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1409444 00:04:31.018 12:09:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1409444 00:04:31.018 12:09:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:31.018 12:09:05 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1409444 ']' 00:04:31.018 12:09:05 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.018 12:09:05 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:31.018 12:09:05 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.018 12:09:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:31.018 12:09:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.019 [2024-11-04 12:09:05.557330] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
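In the spdkcli_tcp run that follows, spdk_tgt itself still listens only on its Unix RPC socket; socat bridges a local TCP port onto that socket, and rpc.py connects through the bridge. The pattern, lifted from the trace:

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &    # expose the RPC socket on 127.0.0.1:9998
socat_pid=$!
rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods    # -r retries, -t timeout in seconds
kill "$socat_pid"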
00:04:31.019 [2024-11-04 12:09:05.557386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409444 ] 00:04:31.280 [2024-11-04 12:09:05.619317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.280 [2024-11-04 12:09:05.656971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.280 [2024-11-04 12:09:05.657106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.852 12:09:06 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.852 12:09:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:31.852 12:09:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1409720 00:04:31.852 12:09:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:31.852 12:09:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:32.113 [ 00:04:32.113 "bdev_malloc_delete", 00:04:32.113 "bdev_malloc_create", 00:04:32.113 "bdev_null_resize", 00:04:32.113 "bdev_null_delete", 00:04:32.113 "bdev_null_create", 00:04:32.113 "bdev_nvme_cuse_unregister", 00:04:32.113 "bdev_nvme_cuse_register", 00:04:32.113 "bdev_opal_new_user", 00:04:32.113 "bdev_opal_set_lock_state", 00:04:32.113 "bdev_opal_delete", 00:04:32.113 "bdev_opal_get_info", 00:04:32.113 "bdev_opal_create", 00:04:32.113 "bdev_nvme_opal_revert", 00:04:32.113 "bdev_nvme_opal_init", 00:04:32.113 "bdev_nvme_send_cmd", 00:04:32.113 "bdev_nvme_set_keys", 00:04:32.113 "bdev_nvme_get_path_iostat", 00:04:32.113 "bdev_nvme_get_mdns_discovery_info", 00:04:32.113 "bdev_nvme_stop_mdns_discovery", 00:04:32.113 "bdev_nvme_start_mdns_discovery", 00:04:32.114 "bdev_nvme_set_multipath_policy", 00:04:32.114 "bdev_nvme_set_preferred_path", 00:04:32.114 "bdev_nvme_get_io_paths", 00:04:32.114 "bdev_nvme_remove_error_injection", 00:04:32.114 "bdev_nvme_add_error_injection", 00:04:32.114 "bdev_nvme_get_discovery_info", 00:04:32.114 "bdev_nvme_stop_discovery", 00:04:32.114 "bdev_nvme_start_discovery", 00:04:32.114 "bdev_nvme_get_controller_health_info", 00:04:32.114 "bdev_nvme_disable_controller", 00:04:32.114 "bdev_nvme_enable_controller", 00:04:32.114 "bdev_nvme_reset_controller", 00:04:32.114 "bdev_nvme_get_transport_statistics", 00:04:32.114 "bdev_nvme_apply_firmware", 00:04:32.114 "bdev_nvme_detach_controller", 00:04:32.114 "bdev_nvme_get_controllers", 00:04:32.114 "bdev_nvme_attach_controller", 00:04:32.114 "bdev_nvme_set_hotplug", 00:04:32.114 "bdev_nvme_set_options", 00:04:32.114 "bdev_passthru_delete", 00:04:32.114 "bdev_passthru_create", 00:04:32.114 "bdev_lvol_set_parent_bdev", 00:04:32.114 "bdev_lvol_set_parent", 00:04:32.114 "bdev_lvol_check_shallow_copy", 00:04:32.114 "bdev_lvol_start_shallow_copy", 00:04:32.114 "bdev_lvol_grow_lvstore", 00:04:32.114 "bdev_lvol_get_lvols", 00:04:32.114 "bdev_lvol_get_lvstores", 00:04:32.114 "bdev_lvol_delete", 00:04:32.114 "bdev_lvol_set_read_only", 00:04:32.114 "bdev_lvol_resize", 00:04:32.114 "bdev_lvol_decouple_parent", 00:04:32.114 "bdev_lvol_inflate", 00:04:32.114 "bdev_lvol_rename", 00:04:32.114 "bdev_lvol_clone_bdev", 00:04:32.114 "bdev_lvol_clone", 00:04:32.114 "bdev_lvol_snapshot", 00:04:32.114 "bdev_lvol_create", 00:04:32.114 "bdev_lvol_delete_lvstore", 00:04:32.114 "bdev_lvol_rename_lvstore", 
00:04:32.114 "bdev_lvol_create_lvstore", 00:04:32.114 "bdev_raid_set_options", 00:04:32.114 "bdev_raid_remove_base_bdev", 00:04:32.114 "bdev_raid_add_base_bdev", 00:04:32.114 "bdev_raid_delete", 00:04:32.114 "bdev_raid_create", 00:04:32.114 "bdev_raid_get_bdevs", 00:04:32.114 "bdev_error_inject_error", 00:04:32.114 "bdev_error_delete", 00:04:32.114 "bdev_error_create", 00:04:32.114 "bdev_split_delete", 00:04:32.114 "bdev_split_create", 00:04:32.114 "bdev_delay_delete", 00:04:32.114 "bdev_delay_create", 00:04:32.114 "bdev_delay_update_latency", 00:04:32.114 "bdev_zone_block_delete", 00:04:32.114 "bdev_zone_block_create", 00:04:32.114 "blobfs_create", 00:04:32.114 "blobfs_detect", 00:04:32.114 "blobfs_set_cache_size", 00:04:32.114 "bdev_aio_delete", 00:04:32.114 "bdev_aio_rescan", 00:04:32.114 "bdev_aio_create", 00:04:32.114 "bdev_ftl_set_property", 00:04:32.114 "bdev_ftl_get_properties", 00:04:32.114 "bdev_ftl_get_stats", 00:04:32.114 "bdev_ftl_unmap", 00:04:32.114 "bdev_ftl_unload", 00:04:32.114 "bdev_ftl_delete", 00:04:32.114 "bdev_ftl_load", 00:04:32.114 "bdev_ftl_create", 00:04:32.114 "bdev_virtio_attach_controller", 00:04:32.114 "bdev_virtio_scsi_get_devices", 00:04:32.114 "bdev_virtio_detach_controller", 00:04:32.114 "bdev_virtio_blk_set_hotplug", 00:04:32.114 "bdev_iscsi_delete", 00:04:32.114 "bdev_iscsi_create", 00:04:32.114 "bdev_iscsi_set_options", 00:04:32.114 "accel_error_inject_error", 00:04:32.114 "ioat_scan_accel_module", 00:04:32.114 "dsa_scan_accel_module", 00:04:32.114 "iaa_scan_accel_module", 00:04:32.114 "vfu_virtio_create_fs_endpoint", 00:04:32.114 "vfu_virtio_create_scsi_endpoint", 00:04:32.114 "vfu_virtio_scsi_remove_target", 00:04:32.114 "vfu_virtio_scsi_add_target", 00:04:32.114 "vfu_virtio_create_blk_endpoint", 00:04:32.114 "vfu_virtio_delete_endpoint", 00:04:32.114 "keyring_file_remove_key", 00:04:32.114 "keyring_file_add_key", 00:04:32.114 "keyring_linux_set_options", 00:04:32.114 "fsdev_aio_delete", 00:04:32.114 "fsdev_aio_create", 00:04:32.114 "iscsi_get_histogram", 00:04:32.114 "iscsi_enable_histogram", 00:04:32.114 "iscsi_set_options", 00:04:32.114 "iscsi_get_auth_groups", 00:04:32.114 "iscsi_auth_group_remove_secret", 00:04:32.114 "iscsi_auth_group_add_secret", 00:04:32.114 "iscsi_delete_auth_group", 00:04:32.114 "iscsi_create_auth_group", 00:04:32.114 "iscsi_set_discovery_auth", 00:04:32.114 "iscsi_get_options", 00:04:32.114 "iscsi_target_node_request_logout", 00:04:32.114 "iscsi_target_node_set_redirect", 00:04:32.114 "iscsi_target_node_set_auth", 00:04:32.114 "iscsi_target_node_add_lun", 00:04:32.114 "iscsi_get_stats", 00:04:32.114 "iscsi_get_connections", 00:04:32.114 "iscsi_portal_group_set_auth", 00:04:32.114 "iscsi_start_portal_group", 00:04:32.114 "iscsi_delete_portal_group", 00:04:32.114 "iscsi_create_portal_group", 00:04:32.114 "iscsi_get_portal_groups", 00:04:32.114 "iscsi_delete_target_node", 00:04:32.114 "iscsi_target_node_remove_pg_ig_maps", 00:04:32.114 "iscsi_target_node_add_pg_ig_maps", 00:04:32.114 "iscsi_create_target_node", 00:04:32.114 "iscsi_get_target_nodes", 00:04:32.114 "iscsi_delete_initiator_group", 00:04:32.114 "iscsi_initiator_group_remove_initiators", 00:04:32.114 "iscsi_initiator_group_add_initiators", 00:04:32.114 "iscsi_create_initiator_group", 00:04:32.114 "iscsi_get_initiator_groups", 00:04:32.114 "nvmf_set_crdt", 00:04:32.114 "nvmf_set_config", 00:04:32.114 "nvmf_set_max_subsystems", 00:04:32.114 "nvmf_stop_mdns_prr", 00:04:32.114 "nvmf_publish_mdns_prr", 00:04:32.114 "nvmf_subsystem_get_listeners", 00:04:32.114 
"nvmf_subsystem_get_qpairs", 00:04:32.114 "nvmf_subsystem_get_controllers", 00:04:32.114 "nvmf_get_stats", 00:04:32.114 "nvmf_get_transports", 00:04:32.114 "nvmf_create_transport", 00:04:32.114 "nvmf_get_targets", 00:04:32.114 "nvmf_delete_target", 00:04:32.114 "nvmf_create_target", 00:04:32.114 "nvmf_subsystem_allow_any_host", 00:04:32.114 "nvmf_subsystem_set_keys", 00:04:32.114 "nvmf_subsystem_remove_host", 00:04:32.114 "nvmf_subsystem_add_host", 00:04:32.114 "nvmf_ns_remove_host", 00:04:32.114 "nvmf_ns_add_host", 00:04:32.114 "nvmf_subsystem_remove_ns", 00:04:32.114 "nvmf_subsystem_set_ns_ana_group", 00:04:32.114 "nvmf_subsystem_add_ns", 00:04:32.114 "nvmf_subsystem_listener_set_ana_state", 00:04:32.114 "nvmf_discovery_get_referrals", 00:04:32.114 "nvmf_discovery_remove_referral", 00:04:32.114 "nvmf_discovery_add_referral", 00:04:32.114 "nvmf_subsystem_remove_listener", 00:04:32.114 "nvmf_subsystem_add_listener", 00:04:32.114 "nvmf_delete_subsystem", 00:04:32.114 "nvmf_create_subsystem", 00:04:32.114 "nvmf_get_subsystems", 00:04:32.114 "env_dpdk_get_mem_stats", 00:04:32.114 "nbd_get_disks", 00:04:32.114 "nbd_stop_disk", 00:04:32.114 "nbd_start_disk", 00:04:32.114 "ublk_recover_disk", 00:04:32.114 "ublk_get_disks", 00:04:32.114 "ublk_stop_disk", 00:04:32.114 "ublk_start_disk", 00:04:32.114 "ublk_destroy_target", 00:04:32.114 "ublk_create_target", 00:04:32.114 "virtio_blk_create_transport", 00:04:32.114 "virtio_blk_get_transports", 00:04:32.114 "vhost_controller_set_coalescing", 00:04:32.114 "vhost_get_controllers", 00:04:32.114 "vhost_delete_controller", 00:04:32.114 "vhost_create_blk_controller", 00:04:32.114 "vhost_scsi_controller_remove_target", 00:04:32.114 "vhost_scsi_controller_add_target", 00:04:32.114 "vhost_start_scsi_controller", 00:04:32.114 "vhost_create_scsi_controller", 00:04:32.114 "thread_set_cpumask", 00:04:32.114 "scheduler_set_options", 00:04:32.114 "framework_get_governor", 00:04:32.114 "framework_get_scheduler", 00:04:32.114 "framework_set_scheduler", 00:04:32.114 "framework_get_reactors", 00:04:32.114 "thread_get_io_channels", 00:04:32.114 "thread_get_pollers", 00:04:32.114 "thread_get_stats", 00:04:32.114 "framework_monitor_context_switch", 00:04:32.114 "spdk_kill_instance", 00:04:32.114 "log_enable_timestamps", 00:04:32.114 "log_get_flags", 00:04:32.114 "log_clear_flag", 00:04:32.114 "log_set_flag", 00:04:32.114 "log_get_level", 00:04:32.114 "log_set_level", 00:04:32.114 "log_get_print_level", 00:04:32.114 "log_set_print_level", 00:04:32.114 "framework_enable_cpumask_locks", 00:04:32.114 "framework_disable_cpumask_locks", 00:04:32.114 "framework_wait_init", 00:04:32.114 "framework_start_init", 00:04:32.114 "scsi_get_devices", 00:04:32.114 "bdev_get_histogram", 00:04:32.114 "bdev_enable_histogram", 00:04:32.114 "bdev_set_qos_limit", 00:04:32.114 "bdev_set_qd_sampling_period", 00:04:32.114 "bdev_get_bdevs", 00:04:32.114 "bdev_reset_iostat", 00:04:32.114 "bdev_get_iostat", 00:04:32.114 "bdev_examine", 00:04:32.114 "bdev_wait_for_examine", 00:04:32.114 "bdev_set_options", 00:04:32.114 "accel_get_stats", 00:04:32.114 "accel_set_options", 00:04:32.114 "accel_set_driver", 00:04:32.114 "accel_crypto_key_destroy", 00:04:32.114 "accel_crypto_keys_get", 00:04:32.114 "accel_crypto_key_create", 00:04:32.114 "accel_assign_opc", 00:04:32.114 "accel_get_module_info", 00:04:32.114 "accel_get_opc_assignments", 00:04:32.114 "vmd_rescan", 00:04:32.114 "vmd_remove_device", 00:04:32.114 "vmd_enable", 00:04:32.114 "sock_get_default_impl", 00:04:32.114 "sock_set_default_impl", 
00:04:32.114 "sock_impl_set_options", 00:04:32.114 "sock_impl_get_options", 00:04:32.114 "iobuf_get_stats", 00:04:32.114 "iobuf_set_options", 00:04:32.114 "keyring_get_keys", 00:04:32.114 "vfu_tgt_set_base_path", 00:04:32.114 "framework_get_pci_devices", 00:04:32.114 "framework_get_config", 00:04:32.114 "framework_get_subsystems", 00:04:32.114 "fsdev_set_opts", 00:04:32.114 "fsdev_get_opts", 00:04:32.114 "trace_get_info", 00:04:32.114 "trace_get_tpoint_group_mask", 00:04:32.114 "trace_disable_tpoint_group", 00:04:32.114 "trace_enable_tpoint_group", 00:04:32.114 "trace_clear_tpoint_mask", 00:04:32.114 "trace_set_tpoint_mask", 00:04:32.114 "notify_get_notifications", 00:04:32.114 "notify_get_types", 00:04:32.114 "spdk_get_version", 00:04:32.114 "rpc_get_methods" 00:04:32.114 ] 00:04:32.115 12:09:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:32.115 12:09:06 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.115 12:09:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:32.115 12:09:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:32.115 12:09:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1409444 00:04:32.115 12:09:06 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1409444 ']' 00:04:32.115 12:09:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1409444 00:04:32.115 12:09:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:32.115 12:09:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:32.115 12:09:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1409444 00:04:32.115 12:09:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:32.115 12:09:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:32.115 12:09:06 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1409444' 00:04:32.115 killing process with pid 1409444 00:04:32.115 12:09:06 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1409444 00:04:32.115 12:09:06 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1409444 00:04:32.375 00:04:32.375 real 0m1.505s 00:04:32.375 user 0m2.811s 00:04:32.375 sys 0m0.417s 00:04:32.375 12:09:06 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.375 12:09:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:32.375 ************************************ 00:04:32.375 END TEST spdkcli_tcp 00:04:32.375 ************************************ 00:04:32.376 12:09:06 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:32.376 12:09:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.376 12:09:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.376 12:09:06 -- common/autotest_common.sh@10 -- # set +x 00:04:32.376 ************************************ 00:04:32.376 START TEST dpdk_mem_utility 00:04:32.376 ************************************ 00:04:32.376 12:09:06 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:32.637 * Looking for test storage... 
00:04:32.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility
00:04:32.637 12:09:06 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:32.637 12:09:06 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version
00:04:32.637 12:09:06 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:32.637 12:09:07 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:32.637 12:09:07 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:04:32.637 12:09:07 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:32.637 12:09:07 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:32.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:32.637 --rc genhtml_branch_coverage=1
00:04:32.637 --rc genhtml_function_coverage=1
00:04:32.637 --rc genhtml_legend=1
00:04:32.637 --rc geninfo_all_blocks=1
00:04:32.637 --rc geninfo_unexecuted_blocks=1
00:04:32.637
00:04:32.637 '
00:04:32.637 12:09:07 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:32.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:32.637 --rc genhtml_branch_coverage=1
00:04:32.637 --rc genhtml_function_coverage=1
00:04:32.637 --rc genhtml_legend=1
00:04:32.637 --rc geninfo_all_blocks=1
00:04:32.637 --rc geninfo_unexecuted_blocks=1
00:04:32.637
00:04:32.637 '
00:04:32.637 12:09:07 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:32.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:32.637 --rc genhtml_branch_coverage=1
00:04:32.637 --rc genhtml_function_coverage=1
00:04:32.637 --rc genhtml_legend=1
00:04:32.637 --rc geninfo_all_blocks=1
00:04:32.637 --rc geninfo_unexecuted_blocks=1
00:04:32.637
00:04:32.637 '
00:04:32.637 12:09:07 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:32.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:32.637 --rc genhtml_branch_coverage=1
00:04:32.637 --rc genhtml_function_coverage=1
00:04:32.637 --rc genhtml_legend=1
00:04:32.637 --rc geninfo_all_blocks=1
00:04:32.637 --rc geninfo_unexecuted_blocks=1
00:04:32.637
00:04:32.637 '
00:04:32.637 12:09:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:32.637 12:09:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1409860
00:04:32.637 12:09:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1409860
00:04:32.637 12:09:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:32.637 12:09:07 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1409860 ']'
00:04:32.637 12:09:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:32.637 12:09:07 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:32.637 12:09:07 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:32.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:32.637 12:09:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:32.637 12:09:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:32.637 [2024-11-04 12:09:07.146870] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
00:04:32.637 [2024-11-04 12:09:07.146945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409860 ]
00:04:32.897 [2024-11-04 12:09:07.210813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:32.897 [2024-11-04 12:09:07.253765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:33.470 12:09:07 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:33.470 12:09:07 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0
00:04:33.470 12:09:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:04:33.470 12:09:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:04:33.470 12:09:07 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:33.470 12:09:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:33.470 {
00:04:33.470 "filename": "/tmp/spdk_mem_dump.txt"
00:04:33.470 }
00:04:33.470 12:09:07 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:33.470 12:09:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:04:33.470 DPDK memory size 810.000000 MiB in 1 heap(s)
00:04:33.470 1 heaps totaling size 810.000000 MiB
00:04:33.470 size: 810.000000 MiB heap id: 0
00:04:33.470 end heaps----------
00:04:33.470 9 mempools totaling size 595.772034 MiB
00:04:33.470 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:04:33.470 size: 158.602051 MiB name: PDU_data_out_Pool
00:04:33.470 size: 92.545471 MiB name: bdev_io_1409860
00:04:33.470 size: 50.003479 MiB name: msgpool_1409860
00:04:33.470 size: 36.509338 MiB name: fsdev_io_1409860
00:04:33.470 size: 21.763794 MiB name: PDU_Pool
00:04:33.470 size: 19.513306 MiB name: SCSI_TASK_Pool
00:04:33.470 size: 4.133484 MiB name: evtpool_1409860
00:04:33.470 size: 0.026123 MiB name: Session_Pool
00:04:33.470 end mempools-------
00:04:33.470 6 memzones totaling size 4.142822 MiB
00:04:33.470 size: 1.000366 MiB name: RG_ring_0_1409860
00:04:33.470 size: 1.000366 MiB name: RG_ring_1_1409860
00:04:33.470 size: 1.000366 MiB name: RG_ring_4_1409860
00:04:33.470 size: 1.000366 MiB name: RG_ring_5_1409860
00:04:33.470 size: 0.125366 MiB name: RG_ring_2_1409860
00:04:33.470 size: 0.015991 MiB name: RG_ring_3_1409860
00:04:33.470 end memzones-------
12:09:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:04:33.733 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15
00:04:33.733 list of free elements. size: 10.862488 MiB
00:04:33.733 element at address: 0x200018a00000 with size: 0.999878 MiB
00:04:33.733 element at address: 0x200018c00000 with size: 0.999878 MiB
00:04:33.733 element at address: 0x200000400000 with size: 0.998535 MiB
00:04:33.733 element at address: 0x200031800000 with size: 0.994446 MiB
00:04:33.733 element at address: 0x200006400000 with size: 0.959839 MiB
00:04:33.733 element at address: 0x200012c00000 with size: 0.954285 MiB
00:04:33.733 element at address: 0x200018e00000 with size: 0.936584 MiB
00:04:33.733 element at address: 0x200000200000 with size: 0.717346 MiB
00:04:33.733 element at address: 0x20001a600000 with size: 0.582886 MiB
00:04:33.733 element at address: 0x200000c00000 with size: 0.495422 MiB
00:04:33.733 element at address: 0x20000a600000 with size: 0.490723 MiB
00:04:33.733 element at address: 0x200019000000 with size: 0.485657 MiB
00:04:33.733 element at address: 0x200003e00000 with size: 0.481934 MiB
00:04:33.733 element at address: 0x200027a00000 with size: 0.410034 MiB
00:04:33.733 element at address: 0x200000800000 with size: 0.355042 MiB
00:04:33.733 list of standard malloc elements. size: 199.218628 MiB
00:04:33.733 element at address: 0x20000a7fff80 with size: 132.000122 MiB
00:04:33.733 element at address: 0x2000065fff80 with size: 64.000122 MiB
00:04:33.733 element at address: 0x200018afff80 with size: 1.000122 MiB
00:04:33.733 element at address: 0x200018cfff80 with size: 1.000122 MiB
00:04:33.733 element at address: 0x200018efff80 with size: 1.000122 MiB
00:04:33.733 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:04:33.733 element at address: 0x200018eeff00 with size: 0.062622 MiB
00:04:33.733 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:04:33.733 element at address: 0x200018eefdc0 with size: 0.000305 MiB
00:04:33.733 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:04:33.733 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:04:33.733 element at address: 0x2000004ffa00 with size: 0.000183 MiB
00:04:33.733 element at address: 0x2000004ffac0 with size: 0.000183 MiB
00:04:33.733 element at address: 0x2000004ffb80 with size: 0.000183 MiB
00:04:33.733 element at address: 0x2000004ffd80 with size: 0.000183 MiB
00:04:33.733 element at address: 0x2000004ffe40 with size: 0.000183 MiB
00:04:33.733 element at address: 0x20000085ae40 with size: 0.000183 MiB
00:04:33.733 element at address: 0x20000085b040 with size: 0.000183 MiB
00:04:33.733 element at address: 0x20000085f300 with size: 0.000183 MiB
00:04:33.733 element at address: 0x20000087f5c0 with size: 0.000183 MiB
00:04:33.733 element at address: 0x20000087f680 with size: 0.000183 MiB
00:04:33.733 element at address: 0x2000008ff940 with size: 0.000183 MiB
00:04:33.733 element at address: 0x2000008ffb40 with size: 0.000183 MiB
00:04:33.733 element at address: 0x200000c7ed40 with size: 0.000183 MiB
00:04:33.733 element at address: 0x200000cff000 with size: 0.000183 MiB
00:04:33.733 element at address: 0x200000cff0c0 with size: 0.000183 MiB
00:04:33.733 element at address: 0x200003e7b600 with size: 0.000183 MiB
00:04:33.733 element at address: 0x200003e7b6c0 with size: 0.000183 MiB
00:04:33.733 element at address: 0x200003efb980 with size: 0.000183 MiB
00:04:33.733 element at address: 0x2000064fdd80 with size: 0.000183 MiB
00:04:33.733 element at address: 0x20000a67da00 with size: 0.000183 MiB
00:04:33.733 element at address: 0x20000a67dac0 with size: 0.000183 MiB
00:04:33.733 element at address: 0x20000a6fdd80 with size: 0.000183 MiB
00:04:33.733 element at address: 0x200012cf44c0 with size: 0.000183 MiB
00:04:33.733 element at address: 0x200018eefc40 with size: 0.000183 MiB
00:04:33.733 element at address: 0x200018eefd00 with size: 0.000183 MiB
00:04:33.733 element at address: 0x2000190bc740 with size: 0.000183 MiB
00:04:33.733 element at address: 0x20001a695380 with size: 0.000183 MiB
00:04:33.733 element at address: 0x20001a695440 with size: 0.000183 MiB
00:04:33.733 element at address: 0x200027a68f80 with size: 0.000183 MiB
00:04:33.733 element at address: 0x200027a69040 with size: 0.000183 MiB
00:04:33.733 element at address: 0x200027a6fc40 with size: 0.000183 MiB
00:04:33.733 element at address: 0x200027a6fe40 with size: 0.000183 MiB
00:04:33.733 element at address: 0x200027a6ff00 with size: 0.000183 MiB
00:04:33.733 list of memzone associated elements. size: 599.918884 MiB
00:04:33.733 element at address: 0x20001a695500 with size: 211.416748 MiB
00:04:33.733 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:33.733 element at address: 0x200027a6ffc0 with size: 157.562561 MiB
00:04:33.733 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:33.733 element at address: 0x200012df4780 with size: 92.045044 MiB
00:04:33.733 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1409860_0
00:04:33.733 element at address: 0x200000dff380 with size: 48.003052 MiB
00:04:33.733 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1409860_0
00:04:33.733 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:04:33.733 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1409860_0
00:04:33.733 element at address: 0x2000191be940 with size: 20.255554 MiB
00:04:33.733 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:33.733 element at address: 0x2000319feb40 with size: 18.005066 MiB
00:04:33.733 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:33.733 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:04:33.733 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1409860_0
00:04:33.733 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:04:33.733 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1409860
00:04:33.733 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:04:33.733 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1409860
00:04:33.733 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:04:33.733 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:33.733 element at address: 0x2000190bc800 with size: 1.008118 MiB
00:04:33.733 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:33.733 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:04:33.733 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:33.733 element at address: 0x200003efba40 with size: 1.008118 MiB
00:04:33.733 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:33.733 element at address: 0x200000cff180 with size: 1.000488 MiB
00:04:33.733 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1409860
00:04:33.733 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:04:33.733 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1409860
00:04:33.733 element at address: 0x200012cf4580 with size: 1.000488 MiB
00:04:33.733 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1409860
00:04:33.733 element at address: 0x2000318fe940 with size: 1.000488 MiB
00:04:33.733 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1409860
00:04:33.733 element at address: 0x20000087f740 with size: 0.500488 MiB
00:04:33.733 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1409860
00:04:33.733 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:04:33.733 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1409860
00:04:33.733 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:04:33.733 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:33.733 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:04:33.733 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:33.733 element at address: 0x20001907c540 with size: 0.250488 MiB
00:04:33.733 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:33.733 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:04:33.733 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1409860
00:04:33.733 element at address: 0x20000085f3c0 with size: 0.125488 MiB
00:04:33.733 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1409860
00:04:33.733 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:04:33.733 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:33.733 element at address: 0x200027a69100 with size: 0.023743 MiB
00:04:33.733 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:33.733 element at address: 0x20000085b100 with size: 0.016113 MiB
00:04:33.733 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1409860
00:04:33.733 element at address: 0x200027a6f240 with size: 0.002441 MiB
00:04:33.733 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:33.733 element at address: 0x2000004ffc40 with size: 0.000305 MiB
00:04:33.733 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1409860
00:04:33.733 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:04:33.734 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1409860
00:04:33.734 element at address: 0x20000085af00 with size: 0.000305 MiB
00:04:33.734 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1409860
00:04:33.734 element at address: 0x200027a6fd00 with size: 0.000305 MiB
00:04:33.734 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:33.734 12:09:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:33.734 12:09:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1409860
00:04:33.734 12:09:08 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1409860 ']'
00:04:33.734 12:09:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1409860
00:04:33.734 12:09:08 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:04:33.734 12:09:08 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:33.734 12:09:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1409860
00:04:33.734 12:09:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:33.734 12:09:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:33.734 12:09:08 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1409860'
00:04:33.734 killing process with pid 1409860
00:04:33.734 12:09:08 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1409860
00:04:33.734 12:09:08 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1409860
00:04:33.996
00:04:33.996 real 0m1.431s
00:04:33.996 user 0m1.521s
00:04:33.996 sys 0m0.419s
00:04:33.996 12:09:08 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:33.996 12:09:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:33.996 ************************************
00:04:33.996 END TEST dpdk_mem_utility
00:04:33.996 ************************************
00:04:33.996 12:09:08 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:33.996 12:09:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:33.996 12:09:08 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:33.996 12:09:08 -- common/autotest_common.sh@10 -- # set +x
00:04:33.996 ************************************
00:04:33.996 START TEST event
00:04:33.996 ************************************
00:04:33.996 12:09:08 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
* Looking for test storage...
00:04:33.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:33.996 12:09:08 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:33.996 12:09:08 event -- common/autotest_common.sh@1691 -- # lcov --version
00:04:33.996 12:09:08 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:34.258 12:09:08 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:34.258 12:09:08 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:34.258 12:09:08 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:34.258 12:09:08 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:34.258 12:09:08 event -- scripts/common.sh@336 -- # IFS=.-:
00:04:34.258 12:09:08 event -- scripts/common.sh@336 -- # read -ra ver1
00:04:34.258 12:09:08 event -- scripts/common.sh@337 -- # IFS=.-:
00:04:34.258 12:09:08 event -- scripts/common.sh@337 -- # read -ra ver2
00:04:34.258 12:09:08 event -- scripts/common.sh@338 -- # local 'op=<'
00:04:34.258 12:09:08 event -- scripts/common.sh@340 -- # ver1_l=2
00:04:34.258 12:09:08 event -- scripts/common.sh@341 -- # ver2_l=1
00:04:34.258 12:09:08 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:34.258 12:09:08 event -- scripts/common.sh@344 -- # case "$op" in
00:04:34.258 12:09:08 event -- scripts/common.sh@345 -- # : 1
00:04:34.258 12:09:08 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:34.258 12:09:08 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:34.258 12:09:08 event -- scripts/common.sh@365 -- # decimal 1
00:04:34.258 12:09:08 event -- scripts/common.sh@353 -- # local d=1
00:04:34.258 12:09:08 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:34.258 12:09:08 event -- scripts/common.sh@355 -- # echo 1
00:04:34.258 12:09:08 event -- scripts/common.sh@365 -- # ver1[v]=1
00:04:34.258 12:09:08 event -- scripts/common.sh@366 -- # decimal 2
00:04:34.258 12:09:08 event -- scripts/common.sh@353 -- # local d=2
00:04:34.258 12:09:08 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:34.258 12:09:08 event -- scripts/common.sh@355 -- # echo 2
00:04:34.258 12:09:08 event -- scripts/common.sh@366 -- # ver2[v]=2
00:04:34.258 12:09:08 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:34.258 12:09:08 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:34.258 12:09:08 event -- scripts/common.sh@368 -- # return 0
00:04:34.258 12:09:08 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:34.258 12:09:08 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:34.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.258 --rc genhtml_branch_coverage=1
00:04:34.258 --rc genhtml_function_coverage=1
00:04:34.258 --rc genhtml_legend=1
00:04:34.258 --rc geninfo_all_blocks=1
00:04:34.258 --rc geninfo_unexecuted_blocks=1
00:04:34.258
00:04:34.258 '
00:04:34.258 12:09:08 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:34.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.258 --rc genhtml_branch_coverage=1
00:04:34.258 --rc genhtml_function_coverage=1
00:04:34.258 --rc genhtml_legend=1
00:04:34.258 --rc geninfo_all_blocks=1
00:04:34.258 --rc geninfo_unexecuted_blocks=1
00:04:34.258
00:04:34.258 '
00:04:34.258 12:09:08 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:34.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.258 --rc genhtml_branch_coverage=1
00:04:34.258 --rc genhtml_function_coverage=1
00:04:34.258 --rc genhtml_legend=1
00:04:34.258 --rc geninfo_all_blocks=1
00:04:34.258 --rc geninfo_unexecuted_blocks=1
00:04:34.258
00:04:34.258 '
00:04:34.258 12:09:08 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:34.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:34.258 --rc genhtml_branch_coverage=1
00:04:34.258 --rc genhtml_function_coverage=1
00:04:34.258 --rc genhtml_legend=1
00:04:34.258 --rc geninfo_all_blocks=1
00:04:34.258 --rc geninfo_unexecuted_blocks=1
00:04:34.258
00:04:34.258 '
00:04:34.258 12:09:08 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:04:34.258 12:09:08 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:34.258 12:09:08 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:34.258 12:09:08 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:04:34.258 12:09:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:34.258 12:09:08 event -- common/autotest_common.sh@10 -- # set +x
00:04:34.258 ************************************
00:04:34.258 START TEST event_perf
00:04:34.258 ************************************
00:04:34.258 12:09:08 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:34.258 Running I/O for 1 seconds...[2024-11-04 12:09:08.664577] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
[2024-11-04 12:09:08.664662] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410264 ]
[2024-11-04 12:09:08.729829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-11-04 12:09:08.770515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-04 12:09:08.770645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-11-04 12:09:08.770800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 1 seconds...[2024-11-04 12:09:08.770800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:35.646
00:04:35.646 lcore 0: 179241
00:04:35.646 lcore 1: 179239
00:04:35.646 lcore 2: 179237
00:04:35.646 lcore 3: 179240
00:04:35.646 done.
00:04:35.646
00:04:35.646 real 0m1.161s
00:04:35.646 user 0m4.090s
00:04:35.646 sys 0m0.067s
00:04:35.646 12:09:09 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:35.646 12:09:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:35.646 ************************************
00:04:35.646 END TEST event_perf
00:04:35.646 ************************************
00:04:35.646 12:09:09 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:35.646 12:09:09 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:04:35.646 12:09:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:35.646 12:09:09 event -- common/autotest_common.sh@10 -- # set +x
00:04:35.646 ************************************
00:04:35.646 START TEST event_reactor
00:04:35.646 ************************************
00:04:35.646 12:09:09 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:35.646 [2024-11-04 12:09:09.904773] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
00:04:35.646 [2024-11-04 12:09:09.904876] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410855 ]
00:04:35.646 [2024-11-04 12:09:09.969833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:35.646 [2024-11-04 12:09:10.009630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:36.588 test_start
00:04:36.588 oneshot
00:04:36.588 tick 100
00:04:36.588 tick 100
00:04:36.588 tick 250
00:04:36.588 tick 100
00:04:36.588 tick 100
00:04:36.588 tick 250
00:04:36.588 tick 100
00:04:36.588 tick 500
00:04:36.588 tick 100
00:04:36.588 tick 100
00:04:36.588 tick 250
00:04:36.588 tick 100
00:04:36.588 tick 100
00:04:36.588 test_end
00:04:36.588
00:04:36.588 real 0m1.160s
00:04:36.588 user 0m1.090s
00:04:36.588 sys 0m0.066s
00:04:36.588 12:09:11 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:36.588 12:09:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:36.588 ************************************
00:04:36.588 END TEST event_reactor
00:04:36.588 ************************************
00:04:36.588 12:09:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:36.588 12:09:11 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:04:36.588 12:09:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:36.588 12:09:11 event -- common/autotest_common.sh@10 -- # set +x
00:04:36.588 ************************************
00:04:36.588 START TEST event_reactor_perf
00:04:36.588 ************************************
00:04:36.588 12:09:11 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:36.588 [2024-11-04 12:09:11.140145] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
00:04:36.588 [2024-11-04 12:09:11.140244] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1411219 ]
00:04:36.848 [2024-11-04 12:09:11.207647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:36.848 [2024-11-04 12:09:11.245776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:37.788 test_start
00:04:37.788 test_end
00:04:37.788 Performance: 370325 events per second
00:04:37.788
00:04:37.788 real 0m1.160s
00:04:37.788 user 0m1.087s
00:04:37.788 sys 0m0.068s
00:04:37.788 12:09:12 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:37.788 12:09:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:04:37.788 ************************************
00:04:37.788 END TEST event_reactor_perf
00:04:37.788 ************************************
00:04:37.788 12:09:12 event -- event/event.sh@49 -- # uname -s
00:04:37.788 12:09:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:37.788 12:09:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:37.788 12:09:12 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:37.788 12:09:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:37.788 12:09:12 event -- common/autotest_common.sh@10 -- # set +x
00:04:38.050 ************************************
00:04:38.050 START TEST event_scheduler
00:04:38.050 ************************************
00:04:38.050 12:09:12 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
* Looking for test storage...
00:04:38.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:04:38.050 12:09:12 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:38.050 12:09:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version
00:04:38.050 12:09:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:38.050 12:09:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:38.050 12:09:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:04:38.051 12:09:12 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:04:38.051 12:09:12 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:38.051 12:09:12 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:38.051 12:09:12 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:04:38.051 12:09:12 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:38.051 12:09:12 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:38.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.051 --rc genhtml_branch_coverage=1
00:04:38.051 --rc genhtml_function_coverage=1
00:04:38.051 --rc genhtml_legend=1
00:04:38.051 --rc geninfo_all_blocks=1
00:04:38.051 --rc geninfo_unexecuted_blocks=1
00:04:38.051
00:04:38.051 '
00:04:38.051 12:09:12 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:38.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.051 --rc genhtml_branch_coverage=1
00:04:38.051 --rc genhtml_function_coverage=1
00:04:38.051 --rc genhtml_legend=1
00:04:38.051 --rc geninfo_all_blocks=1
00:04:38.051 --rc geninfo_unexecuted_blocks=1
00:04:38.051
00:04:38.051 '
00:04:38.051 12:09:12 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:38.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.051 --rc genhtml_branch_coverage=1
00:04:38.051 --rc genhtml_function_coverage=1
00:04:38.051 --rc genhtml_legend=1
00:04:38.051 --rc geninfo_all_blocks=1
00:04:38.051 --rc geninfo_unexecuted_blocks=1
00:04:38.051
00:04:38.051 '
00:04:38.051 12:09:12 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:38.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.051 --rc genhtml_branch_coverage=1
00:04:38.051 --rc genhtml_function_coverage=1
00:04:38.051 --rc genhtml_legend=1
00:04:38.051 --rc geninfo_all_blocks=1
00:04:38.051 --rc geninfo_unexecuted_blocks=1
00:04:38.051
00:04:38.051 '
00:04:38.051 12:09:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:38.051 12:09:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1411514
00:04:38.051 12:09:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:38.051 12:09:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1411514
00:04:38.051 12:09:12 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1411514 ']'
00:04:38.051 12:09:12 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:38.051 12:09:12 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:38.051 12:09:12 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:38.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:38.051 12:09:12 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:38.051 12:09:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:38.051 12:09:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:38.051 [2024-11-04 12:09:12.596203] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
00:04:38.051 [2024-11-04 12:09:12.596273] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1411514 ]
00:04:38.312 [2024-11-04 12:09:12.651595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:38.312 [2024-11-04 12:09:12.692144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:38.312 [2024-11-04 12:09:12.692303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:38.312 [2024-11-04 12:09:12.692459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:38.312 [2024-11-04 12:09:12.692460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:38.312 12:09:12 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:38.312 12:09:12 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:04:38.312 12:09:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:38.312 12:09:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:38.312 12:09:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:38.312 [2024-11-04 12:09:12.732912] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:04:38.312 [2024-11-04 12:09:12.732926] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:04:38.312 [2024-11-04 12:09:12.732933] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:38.312 [2024-11-04 12:09:12.732938] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:38.312 [2024-11-04 12:09:12.732942] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:38.312 12:09:12 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:38.312 12:09:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:38.312 12:09:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:38.312 12:09:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:38.312 [2024-11-04 12:09:12.793577] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:38.312 12:09:12 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:38.312 12:09:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:38.312 12:09:12 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:38.312 12:09:12 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:38.312 12:09:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:38.312 ************************************
00:04:38.312 START TEST scheduler_create_thread
00:04:38.312 ************************************
00:04:38.312 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:04:38.312 12:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:38.312 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:38.312 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:38.312 2
00:04:38.312 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:38.313 12:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:38.313 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:38.313 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:38.313 3
00:04:38.313 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:38.313 12:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:38.313 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:38.313 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:38.313 4
00:04:38.313 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:38.313 12:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:38.313 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:38.313 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:38.573 5
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:38.573 6
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:38.573 7
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:38.573 8
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:38.573 9
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:38.573 12:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:38.833 10
00:04:38.833 12:09:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:38.833 12:09:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:38.833 12:09:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:38.833 12:09:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:40.215 12:09:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:40.215 12:09:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:40.215 12:09:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:40.215 12:09:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:40.215 12:09:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:41.159 12:09:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:41.159 12:09:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:41.159 12:09:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:41.159 12:09:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:41.731 12:09:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:41.731 12:09:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:41.731 12:09:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:41.731 12:09:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:41.731 12:09:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:42.671 12:09:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:42.671
00:04:42.671 real 0m4.225s
00:04:42.671 user 0m0.023s
00:04:42.671 sys 0m0.008s
00:04:42.671 12:09:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:42.671 12:09:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:42.671 ************************************
00:04:42.671 END TEST scheduler_create_thread
00:04:42.672 ************************************
00:04:42.672 12:09:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:42.672 12:09:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1411514
00:04:42.672 12:09:17 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1411514 ']'
00:04:42.672 12:09:17 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1411514
00:04:42.672 12:09:17 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:04:42.672 12:09:17 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:42.672 12:09:17 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1411514
00:04:42.672 12:09:17 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:04:42.672 12:09:17 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:04:42.672 12:09:17 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1411514'
00:04:42.672 killing process with pid 1411514
00:04:42.672 12:09:17 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1411514
00:04:42.672 12:09:17 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1411514
00:04:42.932 [2024-11-04 12:09:17.439093] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:04:43.194
00:04:43.194 real 0m5.237s
00:04:43.194 user 0m11.129s
00:04:43.194 sys 0m0.365s
00:04:43.194 12:09:17 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:43.194 12:09:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:43.194 ************************************
00:04:43.194 END TEST event_scheduler
00:04:43.194 ************************************
00:04:43.194 12:09:17 event -- event/event.sh@51 -- # modprobe -n nbd
00:04:43.194 12:09:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:04:43.194 12:09:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:43.194 12:09:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:43.194 12:09:17 event -- common/autotest_common.sh@10 -- # set +x
00:04:43.194 ************************************
00:04:43.194 START TEST app_repeat
00:04:43.194 ************************************
00:04:43.194 12:09:17 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
00:04:43.194 12:09:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:43.194 12:09:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:43.194 12:09:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:04:43.194 12:09:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:43.194 12:09:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:04:43.194 12:09:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:04:43.194 12:09:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:04:43.194 12:09:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1412627
00:04:43.194 12:09:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:04:43.194 12:09:17 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:04:43.194 12:09:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1412627'
00:04:43.194 Process app_repeat pid: 1412627
00:04:43.194 12:09:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:43.194 12:09:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:04:43.194 spdk_app_start Round 0
00:04:43.194 12:09:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1412627 /var/tmp/spdk-nbd.sock
00:04:43.194 12:09:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1412627 ']'
00:04:43.194 12:09:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:43.194 12:09:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:43.194 12:09:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:43.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:43.194 12:09:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:43.194 12:09:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:43.194 [2024-11-04 12:09:17.723546] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
00:04:43.194 [2024-11-04 12:09:17.723654] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1412627 ] 00:04:43.456 [2024-11-04 12:09:17.792578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.456 [2024-11-04 12:09:17.831959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.456 [2024-11-04 12:09:17.831962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.456 12:09:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.456 12:09:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:43.456 12:09:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.716 Malloc0 00:04:43.716 12:09:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.716 Malloc1 00:04:43.716 12:09:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.716 12:09:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.716 12:09:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.716 12:09:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:43.716 12:09:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.716 12:09:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:43.716 12:09:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.716 12:09:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.716 12:09:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.716 12:09:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:43.716 12:09:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.716 12:09:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:43.716 12:09:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:43.716 12:09:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:43.716 12:09:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.716 12:09:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:43.976 /dev/nbd0 00:04:43.976 12:09:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:43.977 12:09:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:43.977 12:09:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:43.977 12:09:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:43.977 12:09:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:43.977 12:09:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:43.977 12:09:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:04:43.977 12:09:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:43.977 12:09:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:43.977 12:09:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:43.977 12:09:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:43.977 1+0 records in 00:04:43.977 1+0 records out 00:04:43.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017782 s, 23.0 MB/s 00:04:43.977 12:09:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:43.977 12:09:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:43.977 12:09:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:43.977 12:09:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:43.977 12:09:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:43.977 12:09:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:43.977 12:09:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.977 12:09:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:44.237 /dev/nbd1 00:04:44.237 12:09:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:44.237 12:09:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:44.237 12:09:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:44.237 12:09:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:44.237 12:09:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:44.237 12:09:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:44.237 12:09:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:44.237 12:09:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:44.237 12:09:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:44.237 12:09:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:44.237 12:09:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.237 1+0 records in 00:04:44.237 1+0 records out 00:04:44.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283971 s, 14.4 MB/s 00:04:44.237 12:09:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.237 12:09:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:44.237 12:09:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.237 12:09:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:44.237 12:09:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:44.237 12:09:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.237 12:09:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.237 12:09:18 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.237 12:09:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.237 12:09:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.498 12:09:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:44.499 { 00:04:44.499 "nbd_device": "/dev/nbd0", 00:04:44.499 "bdev_name": "Malloc0" 00:04:44.499 }, 00:04:44.499 { 00:04:44.499 "nbd_device": "/dev/nbd1", 00:04:44.499 "bdev_name": "Malloc1" 00:04:44.499 } 00:04:44.499 ]' 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:44.499 { 00:04:44.499 "nbd_device": "/dev/nbd0", 00:04:44.499 "bdev_name": "Malloc0" 00:04:44.499 }, 00:04:44.499 { 00:04:44.499 "nbd_device": "/dev/nbd1", 00:04:44.499 "bdev_name": "Malloc1" 00:04:44.499 } 00:04:44.499 ]' 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:44.499 /dev/nbd1' 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:44.499 /dev/nbd1' 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:44.499 256+0 records in 00:04:44.499 256+0 records out 00:04:44.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117405 s, 89.3 MB/s 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:44.499 256+0 records in 00:04:44.499 256+0 records out 00:04:44.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175041 s, 59.9 MB/s 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:44.499 256+0 records in 00:04:44.499 256+0 records out 00:04:44.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017068 s, 61.4 MB/s 00:04:44.499 12:09:18 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:44.499 12:09:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:44.499 12:09:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:44.499 12:09:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:44.499 12:09:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:44.499 12:09:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:44.499 12:09:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:44.499 12:09:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:44.499 12:09:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:44.499 12:09:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.499 12:09:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.499 12:09:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:44.499 12:09:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:44.499 12:09:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:44.499 12:09:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:44.759 12:09:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:44.759 12:09:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:44.759 12:09:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:44.759 12:09:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:44.759 12:09:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:44.759 12:09:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:44.759 12:09:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:44.759 12:09:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:44.759 12:09:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:44.759 12:09:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:45.020 12:09:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:45.020 12:09:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:45.020 12:09:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:45.020 12:09:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.020 12:09:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:45.020 12:09:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:45.020 12:09:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.020 12:09:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.020 12:09:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.020 12:09:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.020 12:09:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.020 12:09:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:45.020 12:09:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:45.020 12:09:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.298 12:09:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:45.298 12:09:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:45.298 12:09:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.298 12:09:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:45.298 12:09:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:45.298 12:09:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:45.298 12:09:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:45.298 12:09:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:45.298 12:09:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:45.298 12:09:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:45.298 12:09:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:45.561 [2024-11-04 12:09:19.909120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.561 [2024-11-04 12:09:19.944761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.561 [2024-11-04 12:09:19.944770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.561 [2024-11-04 12:09:19.976227] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:45.561 [2024-11-04 12:09:19.976260] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:48.861 12:09:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:48.861 12:09:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:48.861 spdk_app_start Round 1 00:04:48.861 12:09:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1412627 /var/tmp/spdk-nbd.sock 00:04:48.861 12:09:22 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1412627 ']' 00:04:48.861 12:09:22 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:48.861 12:09:22 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.861 12:09:22 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:48.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
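The round that just finished exercises the full nbd data path. Condensed from the Round 0 trace, one iteration amounts to the RPC and dd sequence below; every command appears verbatim in the log, with long workspace paths shortened and the per-device loops flattened slightly relative to nbd_common.sh:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096        # -> Malloc0 (64 MB malloc bdev, 4096-byte blocks)
    $RPC bdev_malloc_create 64 4096        # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256          # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do                           # write pass
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do                           # verify pass
        cmp -b -n 1M nbdrandtest "$nbd"
    done
    rm nbdrandtest
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC spdk_kill_instance SIGTERM                              # end the round

The cmp against the raw device is what makes this a data-integrity check rather than a smoke test: the random bytes must survive the round trip through the malloc bdev.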
00:04:48.861 12:09:22 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.861 12:09:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:48.861 12:09:22 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.861 12:09:22 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:48.861 12:09:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.861 Malloc0 00:04:48.861 12:09:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.861 Malloc1 00:04:48.861 12:09:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.861 12:09:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.861 12:09:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.861 12:09:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:48.861 12:09:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.861 12:09:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:48.861 12:09:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.861 12:09:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.861 12:09:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.861 12:09:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:48.861 12:09:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.861 12:09:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:48.861 12:09:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:48.861 12:09:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:48.862 12:09:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.862 12:09:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:49.121 /dev/nbd0 00:04:49.121 12:09:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:49.121 12:09:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:49.121 12:09:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:49.121 12:09:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:49.121 12:09:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:49.121 12:09:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:49.121 12:09:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:49.121 12:09:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:49.121 12:09:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:49.121 12:09:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:49.121 12:09:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:49.121 1+0 records in 00:04:49.121 1+0 records out 00:04:49.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 7.6028e-05 s, 53.9 MB/s 00:04:49.121 12:09:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.121 12:09:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:49.121 12:09:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.121 12:09:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:49.121 12:09:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:49.121 12:09:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.121 12:09:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.121 12:09:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:49.382 /dev/nbd1 00:04:49.382 12:09:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:49.382 12:09:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:49.382 12:09:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:49.382 12:09:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:49.382 12:09:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:49.382 12:09:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:49.382 12:09:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:49.382 12:09:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:49.382 12:09:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:49.382 12:09:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:49.382 12:09:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.382 1+0 records in 00:04:49.382 1+0 records out 00:04:49.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276709 s, 14.8 MB/s 00:04:49.382 12:09:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.382 12:09:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:49.382 12:09:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.382 12:09:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:49.382 12:09:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:49.382 12:09:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.382 12:09:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.382 12:09:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.382 12:09:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.382 12:09:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.382 12:09:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:49.382 { 00:04:49.382 "nbd_device": "/dev/nbd0", 00:04:49.382 "bdev_name": "Malloc0" 00:04:49.382 }, 00:04:49.382 { 00:04:49.382 "nbd_device": "/dev/nbd1", 00:04:49.382 "bdev_name": "Malloc1" 00:04:49.382 } 00:04:49.382 ]' 00:04:49.382 12:09:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:49.382 { 00:04:49.382 "nbd_device": "/dev/nbd0", 00:04:49.382 "bdev_name": "Malloc0" 00:04:49.382 }, 00:04:49.382 { 00:04:49.382 "nbd_device": "/dev/nbd1", 00:04:49.382 "bdev_name": "Malloc1" 00:04:49.382 } 00:04:49.382 ]' 00:04:49.382 12:09:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.644 12:09:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:49.644 /dev/nbd1' 00:04:49.644 12:09:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:49.644 /dev/nbd1' 00:04:49.644 12:09:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.644 12:09:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:49.644 12:09:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:49.644 12:09:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:49.644 12:09:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:49.644 12:09:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:49.644 12:09:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.644 12:09:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.644 12:09:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:49.644 12:09:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.644 12:09:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:49.644 12:09:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:49.644 256+0 records in 00:04:49.644 256+0 records out 00:04:49.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127508 s, 82.2 MB/s 00:04:49.644 12:09:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.644 12:09:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:49.644 256+0 records in 00:04:49.644 256+0 records out 00:04:49.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165 s, 63.5 MB/s 00:04:49.644 12:09:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.644 12:09:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:49.644 256+0 records in 00:04:49.644 256+0 records out 00:04:49.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191956 s, 54.6 MB/s 00:04:49.644 12:09:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:49.644 12:09:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.644 12:09:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.644 12:09:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:49.644 12:09:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.644 12:09:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:49.644 12:09:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:49.644 12:09:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.644 12:09:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:49.644 12:09:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.644 12:09:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:49.644 12:09:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.645 12:09:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:49.645 12:09:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.645 12:09:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.645 12:09:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:49.645 12:09:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:49.645 12:09:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.645 12:09:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.904 12:09:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.165 12:09:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:50.165 12:09:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:50.165 12:09:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.165 12:09:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:50.165 12:09:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:50.165 12:09:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.165 12:09:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:50.165 12:09:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:50.165 12:09:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:50.165 12:09:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:50.165 12:09:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:50.165 12:09:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:50.165 12:09:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:50.426 12:09:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:50.426 [2024-11-04 12:09:24.958344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.426 [2024-11-04 12:09:24.993313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.426 [2024-11-04 12:09:24.993315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.686 [2024-11-04 12:09:25.025404] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:50.686 [2024-11-04 12:09:25.025440] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:53.983 12:09:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:53.983 12:09:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:53.983 spdk_app_start Round 2 00:04:53.983 12:09:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1412627 /var/tmp/spdk-nbd.sock 00:04:53.983 12:09:27 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1412627 ']' 00:04:53.983 12:09:27 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:53.983 12:09:27 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.983 12:09:27 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:53.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
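The waitfornbd calls traced in each round poll /proc/partitions until the kernel registers the device, then prove it is actually readable with a single direct-I/O block. A sketch reconstructed from the xtrace; the retry delay and the failure return are assumptions, since every traced invocation succeeds on the first iteration of both loops:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1       # assumption: no delay is visible in the trace
        done
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$nbd_name" of=nbdtest bs=4096 count=1 iflag=direct
            local size
            size=$(stat -c %s nbdtest)
            rm -f nbdtest
            [ "$size" != 0 ] && return 0    # one readable block is enough
            sleep 0.1       # assumption, as above
        done
        return 1            # assumption: the failure path is never exercised here
    }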
00:04:53.983 12:09:27 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.983 12:09:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:53.983 12:09:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:53.983 12:09:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:53.983 12:09:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.983 Malloc0 00:04:53.983 12:09:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.983 Malloc1 00:04:53.983 12:09:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.983 12:09:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.983 12:09:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.983 12:09:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:53.983 12:09:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.983 12:09:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.983 12:09:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.983 12:09:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.983 12:09:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.983 12:09:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.983 12:09:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.983 12:09:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.983 12:09:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:53.983 12:09:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.983 12:09:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.983 12:09:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:53.983 /dev/nbd0 00:04:54.243 12:09:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.243 12:09:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:54.243 1+0 records in 00:04:54.243 1+0 records out 00:04:54.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277592 s, 14.8 MB/s 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:54.243 12:09:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.243 12:09:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.243 12:09:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.243 /dev/nbd1 00:04:54.243 12:09:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.243 12:09:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.243 1+0 records in 00:04:54.243 1+0 records out 00:04:54.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245144 s, 16.7 MB/s 00:04:54.243 12:09:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.503 12:09:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:54.503 12:09:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.503 12:09:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:54.503 12:09:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:54.503 12:09:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.503 12:09:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.503 12:09:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.503 12:09:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.503 12:09:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.503 12:09:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:54.503 { 00:04:54.503 "nbd_device": "/dev/nbd0", 00:04:54.503 "bdev_name": "Malloc0" 00:04:54.503 }, 00:04:54.503 { 00:04:54.503 "nbd_device": "/dev/nbd1", 00:04:54.503 "bdev_name": "Malloc1" 00:04:54.503 } 00:04:54.503 ]' 00:04:54.503 12:09:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.503 { 00:04:54.503 "nbd_device": "/dev/nbd0", 00:04:54.503 "bdev_name": "Malloc0" 00:04:54.503 }, 00:04:54.503 { 00:04:54.503 "nbd_device": "/dev/nbd1", 00:04:54.503 "bdev_name": "Malloc1" 00:04:54.503 } 00:04:54.503 ]' 00:04:54.503 12:09:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.503 12:09:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.503 /dev/nbd1' 00:04:54.503 12:09:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.503 /dev/nbd1' 00:04:54.503 12:09:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.503 12:09:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.503 12:09:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.503 12:09:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.503 12:09:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.503 12:09:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.503 12:09:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.503 12:09:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.503 12:09:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.503 12:09:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.503 12:09:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.504 12:09:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.504 256+0 records in 00:04:54.504 256+0 records out 00:04:54.504 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127338 s, 82.3 MB/s 00:04:54.504 12:09:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.504 12:09:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.764 256+0 records in 00:04:54.764 256+0 records out 00:04:54.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161251 s, 65.0 MB/s 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.764 256+0 records in 00:04:54.764 256+0 records out 00:04:54.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0397091 s, 26.4 MB/s 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:54.764 12:09:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:55.024 12:09:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.025 12:09:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.285 12:09:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.285 12:09:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.285 12:09:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.285 12:09:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.285 12:09:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.285 12:09:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.285 12:09:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.285 12:09:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.285 12:09:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.285 12:09:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.285 12:09:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.285 12:09:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.285 12:09:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:55.546 12:09:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:55.546 [2024-11-04 12:09:30.053431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.546 [2024-11-04 12:09:30.088596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.547 [2024-11-04 12:09:30.088599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.807 [2024-11-04 12:09:30.120450] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:55.807 [2024-11-04 12:09:30.120488] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:59.111 12:09:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1412627 /var/tmp/spdk-nbd.sock 00:04:59.111 12:09:32 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1412627 ']' 00:04:59.111 12:09:32 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.111 12:09:32 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.111 12:09:32 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:59.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
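After every teardown the test confirms that no nbd devices were left behind. As the nbd_get_count trace above shows, the check is plain JSON plumbing: nbd_get_disks returns '[]' once both disks are stopped, jq extracts the (now empty) device list, and grep -c counts the /dev/nbd entries. A condensed sketch:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    disks_json=$($RPC nbd_get_disks)                            # '[]' after a clean stop
    disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$disks_name" | grep -c /dev/nbd || true)      # grep exits 1 on zero matches, hence '|| true'
    [ "$count" -ne 0 ] && exit 1                                # any leftover device fails the test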
00:04:59.111 12:09:32 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.111 12:09:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.111 12:09:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.111 12:09:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:59.111 12:09:33 event.app_repeat -- event/event.sh@39 -- # killprocess 1412627 00:04:59.111 12:09:33 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1412627 ']' 00:04:59.111 12:09:33 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1412627 00:04:59.111 12:09:33 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:04:59.111 12:09:33 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:59.111 12:09:33 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1412627 00:04:59.111 12:09:33 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:59.111 12:09:33 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:59.111 12:09:33 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1412627' 00:04:59.111 killing process with pid 1412627 00:04:59.111 12:09:33 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1412627 00:04:59.111 12:09:33 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1412627 00:04:59.111 spdk_app_start is called in Round 0. 00:04:59.111 Shutdown signal received, stop current app iteration 00:04:59.111 Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 reinitialization... 00:04:59.111 spdk_app_start is called in Round 1. 00:04:59.111 Shutdown signal received, stop current app iteration 00:04:59.111 Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 reinitialization... 00:04:59.111 spdk_app_start is called in Round 2. 00:04:59.111 Shutdown signal received, stop current app iteration 00:04:59.111 Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 reinitialization... 00:04:59.111 spdk_app_start is called in Round 3. 
00:04:59.111 Shutdown signal received, stop current app iteration 00:04:59.111 12:09:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:59.111 12:09:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:59.111 00:04:59.111 real 0m15.589s 00:04:59.111 user 0m34.014s 00:04:59.111 sys 0m2.239s 00:04:59.111 12:09:33 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.111 12:09:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.111 ************************************ 00:04:59.111 END TEST app_repeat 00:04:59.111 ************************************ 00:04:59.111 12:09:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:59.111 12:09:33 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:59.111 12:09:33 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.111 12:09:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.111 12:09:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.111 ************************************ 00:04:59.111 START TEST cpu_locks 00:04:59.111 ************************************ 00:04:59.111 12:09:33 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:59.111 * Looking for test storage... 00:04:59.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:59.111 12:09:33 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:59.111 12:09:33 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:59.111 12:09:33 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:59.111 12:09:33 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.111 12:09:33 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:59.111 12:09:33 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.111 12:09:33 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:59.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.111 --rc genhtml_branch_coverage=1 00:04:59.111 --rc genhtml_function_coverage=1 00:04:59.111 --rc genhtml_legend=1 00:04:59.111 --rc geninfo_all_blocks=1 00:04:59.111 --rc geninfo_unexecuted_blocks=1 00:04:59.111 00:04:59.111 ' 00:04:59.111 12:09:33 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:59.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.111 --rc genhtml_branch_coverage=1 00:04:59.111 --rc genhtml_function_coverage=1 00:04:59.111 --rc genhtml_legend=1 00:04:59.111 --rc geninfo_all_blocks=1 00:04:59.111 --rc geninfo_unexecuted_blocks=1 00:04:59.111 00:04:59.111 ' 00:04:59.111 12:09:33 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:59.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.111 --rc genhtml_branch_coverage=1 00:04:59.111 --rc genhtml_function_coverage=1 00:04:59.111 --rc genhtml_legend=1 00:04:59.111 --rc geninfo_all_blocks=1 00:04:59.112 --rc geninfo_unexecuted_blocks=1 00:04:59.112 00:04:59.112 ' 00:04:59.112 12:09:33 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:59.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.112 --rc genhtml_branch_coverage=1 00:04:59.112 --rc genhtml_function_coverage=1 00:04:59.112 --rc genhtml_legend=1 00:04:59.112 --rc geninfo_all_blocks=1 00:04:59.112 --rc geninfo_unexecuted_blocks=1 00:04:59.112 00:04:59.112 ' 00:04:59.112 12:09:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:59.112 12:09:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:59.112 12:09:33 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:59.112 12:09:33 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:59.112 12:09:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.112 12:09:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.112 12:09:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.112 ************************************ 
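The lt/cmp_versions trace above is scripts/common.sh comparing the installed lcov version against 2 in pure bash, component by component. A condensed sketch of the same idea (the real helper additionally validates each component through its decimal function, as the ^[0-9]+$ checks in the trace show):

  lt() {   # succeeds when version $1 is lower than version $2
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
  }

  lt 1.15 2 && echo 'old lcov, use legacy coverage options'   # the 1.15-vs-2 compare in this log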
00:04:59.112 START TEST default_locks 00:04:59.112 ************************************ 00:04:59.112 12:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:04:59.112 12:09:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1416140 00:04:59.112 12:09:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1416140 00:04:59.112 12:09:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.112 12:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1416140 ']' 00:04:59.112 12:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.112 12:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.112 12:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.112 12:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.112 12:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.112 [2024-11-04 12:09:33.649088] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:04:59.112 [2024-11-04 12:09:33.649138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416140 ] 00:04:59.372 [2024-11-04 12:09:33.710446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.372 [2024-11-04 12:09:33.746102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.943 12:09:34 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.943 12:09:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:04:59.943 12:09:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1416140 00:04:59.943 12:09:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1416140 00:04:59.943 12:09:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.513 lslocks: write error 00:05:00.513 12:09:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1416140 00:05:00.513 12:09:34 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1416140 ']' 00:05:00.513 12:09:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1416140 00:05:00.514 12:09:34 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:00.514 12:09:34 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:00.514 12:09:34 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1416140 00:05:00.514 12:09:34 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:00.514 12:09:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:00.514 12:09:34 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 1416140' 00:05:00.514 killing process with pid 1416140 00:05:00.514 12:09:34 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1416140 00:05:00.514 12:09:34 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1416140 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1416140 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1416140 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1416140 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1416140 ']' 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1416140) - No such process 00:05:00.821 ERROR: process (pid: 1416140) is no longer running 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:00.821 00:05:00.821 real 0m1.535s 00:05:00.821 user 0m1.683s 00:05:00.821 sys 0m0.487s 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.821 12:09:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.821 ************************************ 00:05:00.821 END TEST default_locks 00:05:00.821 ************************************ 00:05:00.821 12:09:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:00.821 12:09:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.822 12:09:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.822 12:09:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.822 ************************************ 00:05:00.822 START TEST default_locks_via_rpc 00:05:00.822 ************************************ 00:05:00.822 12:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:00.822 12:09:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1416508 00:05:00.822 12:09:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1416508 00:05:00.822 12:09:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.822 12:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1416508 ']' 00:05:00.822 12:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.822 12:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.822 12:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
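default_locks ends by asserting that waitforlisten on the killed pid fails; the NOT wrapper doing that assertion, sketched from the es bookkeeping visible in the trace (the real version also special-cases exit codes above 128 and an expected-output pattern, both elided here):

  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))   # invert: succeed only if the wrapped command failed
  }

  NOT waitforlisten 1416140   # passes, since pid 1416140 is already gone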
00:05:00.822 12:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.822 12:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.822 [2024-11-04 12:09:35.268347] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:05:00.822 [2024-11-04 12:09:35.268401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416508 ] 00:05:00.822 [2024-11-04 12:09:35.329722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.106 [2024-11-04 12:09:35.369379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.703 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.703 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:01.703 12:09:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:01.703 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.703 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.703 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.703 12:09:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:01.703 12:09:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:01.703 12:09:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:01.703 12:09:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:01.703 12:09:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:01.703 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.703 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.703 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.703 12:09:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1416508 00:05:01.703 12:09:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1416508 00:05:01.703 12:09:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.276 12:09:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1416508 00:05:02.276 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1416508 ']' 00:05:02.276 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1416508 00:05:02.276 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:02.276 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:02.276 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1416508 00:05:02.276 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:02.276 
12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:02.276 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1416508' 00:05:02.276 killing process with pid 1416508 00:05:02.276 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1416508 00:05:02.276 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1416508 00:05:02.538 00:05:02.538 real 0m1.656s 00:05:02.538 user 0m1.789s 00:05:02.538 sys 0m0.545s 00:05:02.538 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.538 12:09:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.538 ************************************ 00:05:02.538 END TEST default_locks_via_rpc 00:05:02.538 ************************************ 00:05:02.538 12:09:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:02.538 12:09:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.538 12:09:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.538 12:09:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.538 ************************************ 00:05:02.538 START TEST non_locking_app_on_locked_coremask 00:05:02.538 ************************************ 00:05:02.538 12:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:02.538 12:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1416886 00:05:02.538 12:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1416886 /var/tmp/spdk.sock 00:05:02.538 12:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.538 12:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1416886 ']' 00:05:02.538 12:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.538 12:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.538 12:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.538 12:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.538 12:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.538 [2024-11-04 12:09:36.992687] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
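default_locks_via_rpc, which just completed above, flips the core locks at runtime through the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs seen in the trace. Outside the harness the same calls would look roughly like this, assuming the repo's rpc.py and this run's default socket:

  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks  # lock files released, target keeps running
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks   # re-claim; fails if another process took a core meanwhile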
00:05:02.538 [2024-11-04 12:09:36.992737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416886 ] 00:05:02.538 [2024-11-04 12:09:37.054521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.538 [2024-11-04 12:09:37.093970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.479 12:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:03.479 12:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:03.479 12:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:03.479 12:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1416945 00:05:03.479 12:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1416945 /var/tmp/spdk2.sock 00:05:03.480 12:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1416945 ']' 00:05:03.480 12:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.480 12:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.480 12:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.480 12:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.480 12:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.480 [2024-11-04 12:09:37.799791] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:05:03.480 [2024-11-04 12:09:37.799844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416945 ] 00:05:03.480 [2024-11-04 12:09:37.889532] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:03.480 [2024-11-04 12:09:37.889562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.480 [2024-11-04 12:09:37.962143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.052 12:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.052 12:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:04.053 12:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1416886 00:05:04.053 12:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1416886 00:05:04.053 12:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:04.624 lslocks: write error 00:05:04.624 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1416886 00:05:04.624 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1416886 ']' 00:05:04.624 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1416886 00:05:04.624 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:04.624 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:04.624 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1416886 00:05:04.624 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:04.624 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:04.624 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1416886' 00:05:04.624 killing process with pid 1416886 00:05:04.624 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1416886 00:05:04.624 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1416886 00:05:05.196 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1416945 00:05:05.196 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1416945 ']' 00:05:05.196 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1416945 00:05:05.196 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:05.196 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:05.196 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1416945 00:05:05.196 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:05.196 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:05.196 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1416945' 00:05:05.196 
killing process with pid 1416945 00:05:05.196 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1416945 00:05:05.196 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1416945 00:05:05.457 00:05:05.457 real 0m2.832s 00:05:05.457 user 0m3.140s 00:05:05.457 sys 0m0.838s 00:05:05.457 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.457 12:09:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.457 ************************************ 00:05:05.457 END TEST non_locking_app_on_locked_coremask 00:05:05.457 ************************************ 00:05:05.457 12:09:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:05.457 12:09:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.457 12:09:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.457 12:09:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.457 ************************************ 00:05:05.457 START TEST locking_app_on_unlocked_coremask 00:05:05.457 ************************************ 00:05:05.457 12:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:05.457 12:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1417534 00:05:05.457 12:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1417534 /var/tmp/spdk.sock 00:05:05.457 12:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:05.457 12:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1417534 ']' 00:05:05.457 12:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.457 12:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.457 12:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.457 12:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.457 12:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.457 [2024-11-04 12:09:39.904169] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:05:05.457 [2024-11-04 12:09:39.904225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1417534 ] 00:05:05.457 [2024-11-04 12:09:39.967841] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
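The locks_exist checks traced on either side of this point are a one-line lslocks probe; the recurring "lslocks: write error" is lslocks hitting EPIPE because grep -q closes the pipe at the first match, not a test failure. Roughly:

  locks_exist() {
      local pid=$1
      # each claimed core corresponds to a file-locked /var/tmp/spdk_cpu_lock_NNN
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }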
00:05:05.457 [2024-11-04 12:09:39.967885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.457 [2024-11-04 12:09:40.007818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.400 12:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.400 12:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:06.400 12:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1417611 00:05:06.400 12:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:06.400 12:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1417611 /var/tmp/spdk2.sock 00:05:06.400 12:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1417611 ']' 00:05:06.400 12:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:06.400 12:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.400 12:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:06.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:06.401 12:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.401 12:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.401 [2024-11-04 12:09:40.752608] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:05:06.401 [2024-11-04 12:09:40.752660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1417611 ] 00:05:06.401 [2024-11-04 12:09:40.841508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.401 [2024-11-04 12:09:40.919753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.974 12:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.974 12:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:06.974 12:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1417611 00:05:07.235 12:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1417611 00:05:07.235 12:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.806 lslocks: write error 00:05:07.806 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1417534 00:05:07.806 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1417534 ']' 00:05:07.806 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1417534 00:05:07.806 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:07.806 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.806 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1417534 00:05:07.806 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:07.806 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:07.806 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1417534' 00:05:07.806 killing process with pid 1417534 00:05:07.806 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1417534 00:05:07.806 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1417534 00:05:08.067 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1417611 00:05:08.067 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1417611 ']' 00:05:08.067 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1417611 00:05:08.067 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:08.067 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.067 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1417611 00:05:08.067 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:08.067 12:09:42 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:08.067 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1417611' 00:05:08.067 killing process with pid 1417611 00:05:08.067 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1417611 00:05:08.067 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1417611 00:05:08.329 00:05:08.329 real 0m2.982s 00:05:08.329 user 0m3.306s 00:05:08.329 sys 0m0.912s 00:05:08.329 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.329 12:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.329 ************************************ 00:05:08.329 END TEST locking_app_on_unlocked_coremask 00:05:08.329 ************************************ 00:05:08.329 12:09:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:08.329 12:09:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.329 12:09:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.329 12:09:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.590 ************************************ 00:05:08.590 START TEST locking_app_on_locked_coremask 00:05:08.590 ************************************ 00:05:08.590 12:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:08.590 12:09:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1418075 00:05:08.590 12:09:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1418075 /var/tmp/spdk.sock 00:05:08.590 12:09:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.590 12:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1418075 ']' 00:05:08.590 12:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.590 12:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.590 12:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.590 12:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.590 12:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.590 [2024-11-04 12:09:42.977817] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:05:08.590 [2024-11-04 12:09:42.977878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418075 ] 00:05:08.590 [2024-11-04 12:09:43.043501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.590 [2024-11-04 12:09:43.085445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1418321 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1418321 /var/tmp/spdk2.sock 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1418321 /var/tmp/spdk2.sock 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1418321 /var/tmp/spdk2.sock 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1418321 ']' 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.532 12:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.532 [2024-11-04 12:09:43.809087] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:05:09.532 [2024-11-04 12:09:43.809140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418321 ] 00:05:09.532 [2024-11-04 12:09:43.899277] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1418075 has claimed it. 00:05:09.532 [2024-11-04 12:09:43.899319] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:10.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1418321) - No such process 00:05:10.103 ERROR: process (pid: 1418321) is no longer running 00:05:10.103 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.103 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:10.103 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:10.103 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:10.103 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:10.103 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:10.103 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1418075 00:05:10.103 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1418075 00:05:10.103 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:10.675 lslocks: write error 00:05:10.675 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1418075 00:05:10.675 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1418075 ']' 00:05:10.675 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1418075 00:05:10.675 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:10.675 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.675 12:09:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1418075 00:05:10.675 12:09:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.675 12:09:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.675 12:09:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1418075' 00:05:10.675 killing process with pid 1418075 00:05:10.675 12:09:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1418075 00:05:10.675 12:09:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1418075 00:05:10.675 00:05:10.675 real 0m2.297s 00:05:10.675 user 0m2.583s 00:05:10.675 sys 0m0.622s 00:05:10.675 12:09:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:05:10.675 12:09:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.675 ************************************ 00:05:10.675 END TEST locking_app_on_locked_coremask 00:05:10.675 ************************************ 00:05:10.675 12:09:45 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:10.675 12:09:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.936 12:09:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.936 12:09:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.936 ************************************ 00:05:10.936 START TEST locking_overlapped_coremask 00:05:10.936 ************************************ 00:05:10.936 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:10.937 12:09:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1418680 00:05:10.937 12:09:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1418680 /var/tmp/spdk.sock 00:05:10.937 12:09:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:10.937 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1418680 ']' 00:05:10.937 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.937 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.937 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.937 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.937 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.937 [2024-11-04 12:09:45.334149] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
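locking_overlapped_coremask, starting above, provokes the conflict with overlapping rather than identical masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so only core 2 collides. Reproduced by hand it would look roughly like:

  build/bin/spdk_tgt -m 0x7 &                          # claims spdk_cpu_lock_000..002
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock    # exits: cannot create lock on core 2, already claimed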
00:05:10.937 [2024-11-04 12:09:45.334197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418680 ] 00:05:10.937 [2024-11-04 12:09:45.394855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:10.937 [2024-11-04 12:09:45.431944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.937 [2024-11-04 12:09:45.432062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.937 [2024-11-04 12:09:45.432065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1418685 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1418685 /var/tmp/spdk2.sock 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1418685 /var/tmp/spdk2.sock 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1418685 /var/tmp/spdk2.sock 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1418685 ']' 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.198 12:09:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.198 [2024-11-04 12:09:45.675671] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:05:11.198 [2024-11-04 12:09:45.675719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418685 ] 00:05:11.198 [2024-11-04 12:09:45.748771] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1418680 has claimed it. 00:05:11.198 [2024-11-04 12:09:45.748805] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:11.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1418685) - No such process 00:05:11.769 ERROR: process (pid: 1418685) is no longer running 00:05:11.769 12:09:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.769 12:09:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:11.769 12:09:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:11.769 12:09:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:11.769 12:09:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:11.769 12:09:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:11.769 12:09:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:11.770 12:09:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:11.770 12:09:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:11.770 12:09:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:11.770 12:09:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1418680 00:05:11.770 12:09:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1418680 ']' 00:05:11.770 12:09:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1418680 00:05:11.770 12:09:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:11.770 12:09:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.770 12:09:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1418680 00:05:12.030 12:09:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:12.030 12:09:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:12.030 12:09:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1418680' 00:05:12.030 killing process with pid 1418680 00:05:12.030 12:09:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1418680 00:05:12.030 12:09:46 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1418680 00:05:12.030 00:05:12.030 real 0m1.288s 00:05:12.030 user 0m3.586s 00:05:12.030 sys 0m0.352s 00:05:12.030 12:09:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.030 12:09:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.030 ************************************ 00:05:12.030 END TEST locking_overlapped_coremask 00:05:12.030 ************************************ 00:05:12.291 12:09:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:12.292 12:09:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.292 12:09:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.292 12:09:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.292 ************************************ 00:05:12.292 START TEST locking_overlapped_coremask_via_rpc 00:05:12.292 ************************************ 00:05:12.292 12:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:12.292 12:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1419010 00:05:12.292 12:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1419010 /var/tmp/spdk.sock 00:05:12.292 12:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:12.292 12:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1419010 ']' 00:05:12.292 12:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.292 12:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.292 12:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.292 12:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.292 12:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.292 [2024-11-04 12:09:46.697580] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:05:12.292 [2024-11-04 12:09:46.697629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419010 ] 00:05:12.292 [2024-11-04 12:09:46.759653] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
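Before that teardown, check_remaining_locks confirmed the surviving 0x7 target still held exactly its three lock files; reconstructed from the glob and brace expansions in the trace:

  check_remaining_locks() {
      local locks=(/var/tmp/spdk_cpu_lock_*)
      local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
      # the glob of actual lock files must list exactly 000, 001 and 002
      [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }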
00:05:12.292 [2024-11-04 12:09:46.759693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.292 [2024-11-04 12:09:46.799279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.292 [2024-11-04 12:09:46.799394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.292 [2024-11-04 12:09:46.799397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.234 12:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.234 12:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:13.234 12:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1419065 00:05:13.235 12:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1419065 /var/tmp/spdk2.sock 00:05:13.235 12:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:13.235 12:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1419065 ']' 00:05:13.235 12:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.235 12:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.235 12:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.235 12:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.235 12:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.235 [2024-11-04 12:09:47.552610] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:05:13.235 [2024-11-04 12:09:47.552668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419065 ] 00:05:13.235 [2024-11-04 12:09:47.627795] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:13.235 [2024-11-04 12:09:47.627821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.235 [2024-11-04 12:09:47.687188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.235 [2024-11-04 12:09:47.690868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.235 [2024-11-04 12:09:47.690870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.807 [2024-11-04 12:09:48.343814] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1419010 has claimed it. 
00:05:13.807 request: 00:05:13.807 { 00:05:13.807 "method": "framework_enable_cpumask_locks", 00:05:13.807 "req_id": 1 00:05:13.807 } 00:05:13.807 Got JSON-RPC error response 00:05:13.807 response: 00:05:13.807 { 00:05:13.807 "code": -32603, 00:05:13.807 "message": "Failed to claim CPU core: 2" 00:05:13.807 } 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1419010 /var/tmp/spdk.sock 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1419010 ']' 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.807 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.068 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.068 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:14.068 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1419065 /var/tmp/spdk2.sock 00:05:14.068 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1419065 ']' 00:05:14.068 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.068 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.068 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
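Why core 2 specifically: the first target runs with -m 0x7 (binary 111, cores 0-2) and the second with -m 0x1c (binary 11100, cores 2-4), so core 2 is the only core in both masks. Both start with --disable-cpumask-locks; the first target then claims its cores via framework_enable_cpumask_locks, and the same RPC against the second target is the call that fails with -32603 above. The manual equivalent of what rpc_cmd drives here (same rpc.py and socket as in the trace):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# -> "Failed to claim CPU core: 2" (-32603) while the first target holds the lock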
00:05:14.068 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.068 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.329 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.329 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:14.329 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:14.329 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:14.329 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:14.329 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:14.329 00:05:14.329 real 0m2.078s 00:05:14.329 user 0m0.852s 00:05:14.329 sys 0m0.153s 00:05:14.329 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.329 12:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.329 ************************************ 00:05:14.329 END TEST locking_overlapped_coremask_via_rpc 00:05:14.329 ************************************ 00:05:14.329 12:09:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:14.329 12:09:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1419010 ]] 00:05:14.329 12:09:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1419010 00:05:14.330 12:09:48 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1419010 ']' 00:05:14.330 12:09:48 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1419010 00:05:14.330 12:09:48 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:14.330 12:09:48 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.330 12:09:48 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1419010 00:05:14.330 12:09:48 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.330 12:09:48 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.330 12:09:48 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1419010' 00:05:14.330 killing process with pid 1419010 00:05:14.330 12:09:48 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1419010 00:05:14.330 12:09:48 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1419010 00:05:14.590 12:09:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1419065 ]] 00:05:14.590 12:09:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1419065 00:05:14.590 12:09:49 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1419065 ']' 00:05:14.590 12:09:49 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1419065 00:05:14.590 12:09:49 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:14.590 12:09:49 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:05:14.590 12:09:49 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1419065 00:05:14.590 12:09:49 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:14.590 12:09:49 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:14.590 12:09:49 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1419065' 00:05:14.590 killing process with pid 1419065 00:05:14.590 12:09:49 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1419065 00:05:14.590 12:09:49 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1419065 00:05:14.851 12:09:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:14.851 12:09:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:14.851 12:09:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1419010 ]] 00:05:14.851 12:09:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1419010 00:05:14.851 12:09:49 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1419010 ']' 00:05:14.851 12:09:49 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1419010 00:05:14.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1419010) - No such process 00:05:14.851 12:09:49 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1419010 is not found' 00:05:14.851 Process with pid 1419010 is not found 00:05:14.851 12:09:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1419065 ]] 00:05:14.851 12:09:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1419065 00:05:14.851 12:09:49 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1419065 ']' 00:05:14.851 12:09:49 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1419065 00:05:14.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1419065) - No such process 00:05:14.851 12:09:49 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1419065 is not found' 00:05:14.851 Process with pid 1419065 is not found 00:05:14.851 12:09:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:14.851 00:05:14.851 real 0m15.943s 00:05:14.851 user 0m27.156s 00:05:14.851 sys 0m4.842s 00:05:14.851 12:09:49 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.851 12:09:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.851 ************************************ 00:05:14.851 END TEST cpu_locks 00:05:14.851 ************************************ 00:05:14.851 00:05:14.851 real 0m40.933s 00:05:14.851 user 1m18.839s 00:05:14.851 sys 0m8.094s 00:05:14.851 12:09:49 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.851 12:09:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.851 ************************************ 00:05:14.851 END TEST event 00:05:14.851 ************************************ 00:05:14.851 12:09:49 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:14.851 12:09:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.851 12:09:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.851 12:09:49 -- common/autotest_common.sh@10 -- # set +x 00:05:14.851 ************************************ 00:05:14.851 START TEST thread 00:05:14.851 ************************************ 00:05:14.851 12:09:49 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:15.112 * Looking for test storage... 00:05:15.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:15.112 12:09:49 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:15.112 12:09:49 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:15.112 12:09:49 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:15.112 12:09:49 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:15.112 12:09:49 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.112 12:09:49 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.112 12:09:49 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.112 12:09:49 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.112 12:09:49 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.112 12:09:49 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.112 12:09:49 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.112 12:09:49 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.112 12:09:49 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.112 12:09:49 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.112 12:09:49 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.112 12:09:49 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:15.112 12:09:49 thread -- scripts/common.sh@345 -- # : 1 00:05:15.112 12:09:49 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.112 12:09:49 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.112 12:09:49 thread -- scripts/common.sh@365 -- # decimal 1 00:05:15.112 12:09:49 thread -- scripts/common.sh@353 -- # local d=1 00:05:15.112 12:09:49 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.112 12:09:49 thread -- scripts/common.sh@355 -- # echo 1 00:05:15.112 12:09:49 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.112 12:09:49 thread -- scripts/common.sh@366 -- # decimal 2 00:05:15.112 12:09:49 thread -- scripts/common.sh@353 -- # local d=2 00:05:15.112 12:09:49 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.112 12:09:49 thread -- scripts/common.sh@355 -- # echo 2 00:05:15.112 12:09:49 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.112 12:09:49 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.112 12:09:49 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.112 12:09:49 thread -- scripts/common.sh@368 -- # return 0 00:05:15.112 12:09:49 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.112 12:09:49 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:15.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.112 --rc genhtml_branch_coverage=1 00:05:15.112 --rc genhtml_function_coverage=1 00:05:15.112 --rc genhtml_legend=1 00:05:15.112 --rc geninfo_all_blocks=1 00:05:15.112 --rc geninfo_unexecuted_blocks=1 00:05:15.112 00:05:15.112 ' 00:05:15.112 12:09:49 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:15.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.112 --rc genhtml_branch_coverage=1 00:05:15.112 --rc genhtml_function_coverage=1 00:05:15.112 --rc genhtml_legend=1 00:05:15.112 --rc geninfo_all_blocks=1 00:05:15.112 --rc geninfo_unexecuted_blocks=1 00:05:15.112 
00:05:15.112 ' 00:05:15.112 12:09:49 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:15.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.112 --rc genhtml_branch_coverage=1 00:05:15.112 --rc genhtml_function_coverage=1 00:05:15.112 --rc genhtml_legend=1 00:05:15.112 --rc geninfo_all_blocks=1 00:05:15.112 --rc geninfo_unexecuted_blocks=1 00:05:15.112 00:05:15.112 ' 00:05:15.112 12:09:49 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:15.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.112 --rc genhtml_branch_coverage=1 00:05:15.112 --rc genhtml_function_coverage=1 00:05:15.112 --rc genhtml_legend=1 00:05:15.112 --rc geninfo_all_blocks=1 00:05:15.112 --rc geninfo_unexecuted_blocks=1 00:05:15.112 00:05:15.112 ' 00:05:15.112 12:09:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:15.112 12:09:49 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:15.112 12:09:49 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.112 12:09:49 thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.112 ************************************ 00:05:15.112 START TEST thread_poller_perf 00:05:15.112 ************************************ 00:05:15.112 12:09:49 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:15.112 [2024-11-04 12:09:49.666842] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:05:15.112 [2024-11-04 12:09:49.666939] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419642 ] 00:05:15.374 [2024-11-04 12:09:49.732642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.374 [2024-11-04 12:09:49.768799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.374 Running 1000 pollers for 1 seconds with 1 microseconds period. 
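The "Running 1000 pollers for 1 seconds with 1 microseconds period." banner is the flags read back: -b 1000 pollers, -l 1 for the poller period in microseconds, -t 1 for the run time in seconds. The same binary can be invoked standalone with the path shown in the trace:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1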
00:05:16.316 [2024-11-04T11:09:50.886Z] ====================================== 00:05:16.316 [2024-11-04T11:09:50.886Z] busy:2412048874 (cyc) 00:05:16.316 [2024-11-04T11:09:50.886Z] total_run_count: 287000 00:05:16.316 [2024-11-04T11:09:50.886Z] tsc_hz: 2400000000 (cyc) 00:05:16.316 [2024-11-04T11:09:50.886Z] ====================================== 00:05:16.316 [2024-11-04T11:09:50.886Z] poller_cost: 8404 (cyc), 3501 (nsec) 00:05:16.316 00:05:16.316 real 0m1.167s 00:05:16.316 user 0m1.096s 00:05:16.316 sys 0m0.067s 00:05:16.316 12:09:50 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.316 12:09:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.316 ************************************ 00:05:16.316 END TEST thread_poller_perf 00:05:16.316 ************************************ 00:05:16.316 12:09:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:16.316 12:09:50 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:16.316 12:09:50 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.316 12:09:50 thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.576 ************************************ 00:05:16.576 START TEST thread_poller_perf 00:05:16.576 ************************************ 00:05:16.576 12:09:50 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:16.576 [2024-11-04 12:09:50.909798] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:05:16.577 [2024-11-04 12:09:50.909894] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419861 ] 00:05:16.577 [2024-11-04 12:09:50.974179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.577 [2024-11-04 12:09:51.010742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.577 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:17.521 [2024-11-04T11:09:52.091Z] ====================================== 00:05:17.521 [2024-11-04T11:09:52.091Z] busy:2402106252 (cyc) 00:05:17.521 [2024-11-04T11:09:52.091Z] total_run_count: 3807000 00:05:17.521 [2024-11-04T11:09:52.091Z] tsc_hz: 2400000000 (cyc) 00:05:17.521 [2024-11-04T11:09:52.091Z] ====================================== 00:05:17.521 [2024-11-04T11:09:52.091Z] poller_cost: 630 (cyc), 262 (nsec) 00:05:17.521 00:05:17.521 real 0m1.156s 00:05:17.521 user 0m1.084s 00:05:17.521 sys 0m0.068s 00:05:17.521 12:09:52 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.521 12:09:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:17.521 ************************************ 00:05:17.521 END TEST thread_poller_perf 00:05:17.521 ************************************ 00:05:17.521 12:09:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:17.521 00:05:17.521 real 0m2.668s 00:05:17.521 user 0m2.354s 00:05:17.521 sys 0m0.325s 00:05:17.521 12:09:52 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.521 12:09:52 thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.521 ************************************ 00:05:17.521 END TEST thread 00:05:17.521 ************************************ 00:05:17.781 12:09:52 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:17.781 12:09:52 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:17.781 12:09:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.782 12:09:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.782 12:09:52 -- common/autotest_common.sh@10 -- # set +x 00:05:17.782 ************************************ 00:05:17.782 START TEST app_cmdline 00:05:17.782 ************************************ 00:05:17.782 12:09:52 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:17.782 * Looking for test storage... 
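Back-of-envelope check on the two poller_perf tables above: poller_cost is busy cycles divided by total_run_count, and the nsec figure follows from tsc_hz (2.4 GHz here). A throwaway awk sketch reproducing both rows:

awk 'BEGIN { hz = 2400000000 / 1e9                       # cycles per nsec
             c1 = 2412048874 / 287000;  printf "run 1 (-l 1): %d cyc, %d nsec\n", c1, c1/hz
             c2 = 2402106252 / 3807000; printf "run 2 (-l 0): %d cyc, %d nsec\n", c2, c2/hz }'
# -> run 1 (-l 1): 8404 cyc, 3501 nsec
# -> run 2 (-l 0): 630 cyc, 262 nsec

The gap between the two runs reflects the extra bookkeeping of timed (1 microsecond period) pollers versus active (period 0) ones.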
00:05:17.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:17.782 12:09:52 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:17.782 12:09:52 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:17.782 12:09:52 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:17.782 12:09:52 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.782 12:09:52 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:18.043 12:09:52 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.043 12:09:52 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.043 12:09:52 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.043 12:09:52 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:18.043 12:09:52 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.043 12:09:52 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:18.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.043 --rc genhtml_branch_coverage=1 00:05:18.043 --rc genhtml_function_coverage=1 00:05:18.043 --rc genhtml_legend=1 00:05:18.043 --rc geninfo_all_blocks=1 00:05:18.043 --rc geninfo_unexecuted_blocks=1 00:05:18.043 00:05:18.043 ' 00:05:18.043 12:09:52 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:18.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.043 --rc genhtml_branch_coverage=1 00:05:18.043 --rc genhtml_function_coverage=1 00:05:18.043 --rc genhtml_legend=1 00:05:18.043 --rc geninfo_all_blocks=1 00:05:18.043 --rc geninfo_unexecuted_blocks=1 
00:05:18.043 00:05:18.043 ' 00:05:18.043 12:09:52 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:18.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.043 --rc genhtml_branch_coverage=1 00:05:18.043 --rc genhtml_function_coverage=1 00:05:18.043 --rc genhtml_legend=1 00:05:18.043 --rc geninfo_all_blocks=1 00:05:18.043 --rc geninfo_unexecuted_blocks=1 00:05:18.043 00:05:18.043 ' 00:05:18.043 12:09:52 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:18.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.043 --rc genhtml_branch_coverage=1 00:05:18.043 --rc genhtml_function_coverage=1 00:05:18.043 --rc genhtml_legend=1 00:05:18.043 --rc geninfo_all_blocks=1 00:05:18.043 --rc geninfo_unexecuted_blocks=1 00:05:18.043 00:05:18.043 ' 00:05:18.043 12:09:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:18.043 12:09:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1420268 00:05:18.043 12:09:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1420268 00:05:18.043 12:09:52 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1420268 ']' 00:05:18.043 12:09:52 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:18.043 12:09:52 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.043 12:09:52 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.043 12:09:52 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.043 12:09:52 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.043 12:09:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:18.043 [2024-11-04 12:09:52.414516] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:05:18.043 [2024-11-04 12:09:52.414585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420268 ] 00:05:18.043 [2024-11-04 12:09:52.478719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.043 [2024-11-04 12:09:52.522503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.984 12:09:53 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.984 12:09:53 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:18.984 12:09:53 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:18.984 { 00:05:18.984 "version": "SPDK v25.01-pre git sha1 c3ade7c9c", 00:05:18.984 "fields": { 00:05:18.984 "major": 25, 00:05:18.984 "minor": 1, 00:05:18.984 "patch": 0, 00:05:18.984 "suffix": "-pre", 00:05:18.984 "commit": "c3ade7c9c" 00:05:18.984 } 00:05:18.984 } 00:05:18.984 12:09:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:18.984 12:09:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:18.984 12:09:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:18.984 12:09:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:18.984 12:09:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:18.984 12:09:53 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.984 12:09:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:18.984 12:09:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:18.984 12:09:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:18.984 12:09:53 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.984 12:09:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:18.984 12:09:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:18.984 12:09:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:18.984 12:09:53 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:18.984 12:09:53 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:18.984 12:09:53 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:18.984 12:09:53 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.984 12:09:53 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:18.984 12:09:53 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.984 12:09:53 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:18.984 12:09:53 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.984 12:09:53 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:18.984 12:09:53 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:18.984 12:09:53 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:19.245 request: 00:05:19.245 { 00:05:19.245 "method": "env_dpdk_get_mem_stats", 00:05:19.245 "req_id": 1 00:05:19.245 } 00:05:19.245 Got JSON-RPC error response 00:05:19.245 response: 00:05:19.245 { 00:05:19.245 "code": -32601, 00:05:19.245 "message": "Method not found" 00:05:19.245 } 00:05:19.245 12:09:53 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:19.245 12:09:53 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:19.245 12:09:53 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:19.245 12:09:53 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:19.245 12:09:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1420268 00:05:19.245 12:09:53 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1420268 ']' 00:05:19.245 12:09:53 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1420268 00:05:19.245 12:09:53 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:19.245 12:09:53 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.245 12:09:53 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1420268 00:05:19.245 12:09:53 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.245 12:09:53 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.245 12:09:53 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1420268' 00:05:19.245 killing process with pid 1420268 00:05:19.245 12:09:53 app_cmdline -- common/autotest_common.sh@969 -- # kill 1420268 00:05:19.245 12:09:53 app_cmdline -- common/autotest_common.sh@974 -- # wait 1420268 00:05:19.505 00:05:19.505 real 0m1.711s 00:05:19.505 user 0m2.038s 00:05:19.505 sys 0m0.460s 00:05:19.505 12:09:53 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.505 12:09:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:19.505 ************************************ 00:05:19.505 END TEST app_cmdline 00:05:19.505 ************************************ 00:05:19.505 12:09:53 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:19.505 12:09:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.505 12:09:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.505 12:09:53 -- common/autotest_common.sh@10 -- # set +x 00:05:19.505 ************************************ 00:05:19.505 START TEST version 00:05:19.505 ************************************ 00:05:19.505 12:09:53 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:19.505 * Looking for test storage... 
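The -32601 "Method not found" above is the expected outcome, not a failure: this target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside that allowlist is rejected. Reproduced by hand with the same rpc.py from the trace:

# allowed by --rpcs-allowed:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
# rejected with -32601 "Method not found":
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats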
00:05:19.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:19.505 12:09:54 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:19.505 12:09:54 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:19.505 12:09:54 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:19.766 12:09:54 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:19.766 12:09:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.766 12:09:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.766 12:09:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.766 12:09:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.766 12:09:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.766 12:09:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.766 12:09:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.766 12:09:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.766 12:09:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.766 12:09:54 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.766 12:09:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.766 12:09:54 version -- scripts/common.sh@344 -- # case "$op" in 00:05:19.766 12:09:54 version -- scripts/common.sh@345 -- # : 1 00:05:19.766 12:09:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.766 12:09:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.766 12:09:54 version -- scripts/common.sh@365 -- # decimal 1 00:05:19.766 12:09:54 version -- scripts/common.sh@353 -- # local d=1 00:05:19.766 12:09:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.766 12:09:54 version -- scripts/common.sh@355 -- # echo 1 00:05:19.766 12:09:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.766 12:09:54 version -- scripts/common.sh@366 -- # decimal 2 00:05:19.766 12:09:54 version -- scripts/common.sh@353 -- # local d=2 00:05:19.766 12:09:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.766 12:09:54 version -- scripts/common.sh@355 -- # echo 2 00:05:19.766 12:09:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.766 12:09:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.766 12:09:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.766 12:09:54 version -- scripts/common.sh@368 -- # return 0 00:05:19.766 12:09:54 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.766 12:09:54 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:19.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.766 --rc genhtml_branch_coverage=1 00:05:19.766 --rc genhtml_function_coverage=1 00:05:19.766 --rc genhtml_legend=1 00:05:19.766 --rc geninfo_all_blocks=1 00:05:19.766 --rc geninfo_unexecuted_blocks=1 00:05:19.766 00:05:19.766 ' 00:05:19.766 12:09:54 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:19.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.766 --rc genhtml_branch_coverage=1 00:05:19.766 --rc genhtml_function_coverage=1 00:05:19.766 --rc genhtml_legend=1 00:05:19.766 --rc geninfo_all_blocks=1 00:05:19.766 --rc geninfo_unexecuted_blocks=1 00:05:19.766 00:05:19.766 ' 00:05:19.766 12:09:54 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:19.766 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.766 --rc genhtml_branch_coverage=1 00:05:19.766 --rc genhtml_function_coverage=1 00:05:19.766 --rc genhtml_legend=1 00:05:19.766 --rc geninfo_all_blocks=1 00:05:19.766 --rc geninfo_unexecuted_blocks=1 00:05:19.766 00:05:19.766 ' 00:05:19.766 12:09:54 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:19.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.766 --rc genhtml_branch_coverage=1 00:05:19.766 --rc genhtml_function_coverage=1 00:05:19.766 --rc genhtml_legend=1 00:05:19.766 --rc geninfo_all_blocks=1 00:05:19.766 --rc geninfo_unexecuted_blocks=1 00:05:19.766 00:05:19.766 ' 00:05:19.766 12:09:54 version -- app/version.sh@17 -- # get_header_version major 00:05:19.766 12:09:54 version -- app/version.sh@14 -- # cut -f2 00:05:19.766 12:09:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:19.766 12:09:54 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.766 12:09:54 version -- app/version.sh@17 -- # major=25 00:05:19.766 12:09:54 version -- app/version.sh@18 -- # get_header_version minor 00:05:19.766 12:09:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:19.766 12:09:54 version -- app/version.sh@14 -- # cut -f2 00:05:19.766 12:09:54 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.766 12:09:54 version -- app/version.sh@18 -- # minor=1 00:05:19.766 12:09:54 version -- app/version.sh@19 -- # get_header_version patch 00:05:19.766 12:09:54 version -- app/version.sh@14 -- # cut -f2 00:05:19.766 12:09:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:19.766 12:09:54 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.766 12:09:54 version -- app/version.sh@19 -- # patch=0 00:05:19.766 12:09:54 version -- app/version.sh@20 -- # get_header_version suffix 00:05:19.766 12:09:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:19.766 12:09:54 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.766 12:09:54 version -- app/version.sh@14 -- # cut -f2 00:05:19.766 12:09:54 version -- app/version.sh@20 -- # suffix=-pre 00:05:19.766 12:09:54 version -- app/version.sh@22 -- # version=25.1 00:05:19.766 12:09:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:19.766 12:09:54 version -- app/version.sh@28 -- # version=25.1rc0 00:05:19.766 12:09:54 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:19.766 12:09:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:19.766 12:09:54 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:19.766 12:09:54 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:19.766 00:05:19.766 real 0m0.266s 00:05:19.766 user 0m0.159s 00:05:19.766 sys 0m0.143s 00:05:19.766 12:09:54 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.766 
12:09:54 version -- common/autotest_common.sh@10 -- # set +x 00:05:19.766 ************************************ 00:05:19.766 END TEST version 00:05:19.766 ************************************ 00:05:19.766 12:09:54 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:19.766 12:09:54 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:19.766 12:09:54 -- spdk/autotest.sh@194 -- # uname -s 00:05:19.766 12:09:54 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:19.767 12:09:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:19.767 12:09:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:19.767 12:09:54 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:19.767 12:09:54 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:19.767 12:09:54 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:19.767 12:09:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:19.767 12:09:54 -- common/autotest_common.sh@10 -- # set +x 00:05:19.767 12:09:54 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:19.767 12:09:54 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:19.767 12:09:54 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:19.767 12:09:54 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:19.767 12:09:54 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:19.767 12:09:54 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:19.767 12:09:54 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:19.767 12:09:54 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:19.767 12:09:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.767 12:09:54 -- common/autotest_common.sh@10 -- # set +x 00:05:19.767 ************************************ 00:05:19.767 START TEST nvmf_tcp 00:05:19.767 ************************************ 00:05:19.767 12:09:54 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:20.027 * Looking for test storage... 
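For reference, the get_header_version calls in the version test above are plain grep/cut/tr pipelines over include/spdk/version.h, and the final [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] compares that assembled string against python's spdk.__version__. Condensed (same commands as the trace, run from the spdk checkout):

major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
echo "${major}.${minor}rc0"   # patch is 0 and suffix is -pre here, hence the rc0
# -> 25.1rc0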
00:05:20.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:20.027 12:09:54 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:20.027 12:09:54 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:20.027 12:09:54 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:20.027 12:09:54 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.028 12:09:54 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:20.028 12:09:54 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.028 12:09:54 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:20.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.028 --rc genhtml_branch_coverage=1 00:05:20.028 --rc genhtml_function_coverage=1 00:05:20.028 --rc genhtml_legend=1 00:05:20.028 --rc geninfo_all_blocks=1 00:05:20.028 --rc geninfo_unexecuted_blocks=1 00:05:20.028 00:05:20.028 ' 00:05:20.028 12:09:54 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:20.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.028 --rc genhtml_branch_coverage=1 00:05:20.028 --rc genhtml_function_coverage=1 00:05:20.028 --rc genhtml_legend=1 00:05:20.028 --rc geninfo_all_blocks=1 00:05:20.028 --rc geninfo_unexecuted_blocks=1 00:05:20.028 00:05:20.028 ' 00:05:20.028 12:09:54 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:20.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.028 --rc genhtml_branch_coverage=1 00:05:20.028 --rc genhtml_function_coverage=1 00:05:20.028 --rc genhtml_legend=1 00:05:20.028 --rc geninfo_all_blocks=1 00:05:20.028 --rc geninfo_unexecuted_blocks=1 00:05:20.028 00:05:20.028 ' 00:05:20.028 12:09:54 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:20.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.028 --rc genhtml_branch_coverage=1 00:05:20.028 --rc genhtml_function_coverage=1 00:05:20.028 --rc genhtml_legend=1 00:05:20.028 --rc geninfo_all_blocks=1 00:05:20.028 --rc geninfo_unexecuted_blocks=1 00:05:20.028 00:05:20.028 ' 00:05:20.028 12:09:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:20.028 12:09:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:20.028 12:09:54 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:20.028 12:09:54 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:20.028 12:09:54 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.028 12:09:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.028 ************************************ 00:05:20.028 START TEST nvmf_target_core 00:05:20.028 ************************************ 00:05:20.028 12:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:20.290 * Looking for test storage... 00:05:20.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:20.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.290 --rc genhtml_branch_coverage=1 00:05:20.290 --rc genhtml_function_coverage=1 00:05:20.290 --rc genhtml_legend=1 00:05:20.290 --rc geninfo_all_blocks=1 00:05:20.290 --rc geninfo_unexecuted_blocks=1 00:05:20.290 00:05:20.290 ' 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:20.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.290 --rc genhtml_branch_coverage=1 00:05:20.290 --rc genhtml_function_coverage=1 00:05:20.290 --rc genhtml_legend=1 00:05:20.290 --rc geninfo_all_blocks=1 00:05:20.290 --rc geninfo_unexecuted_blocks=1 00:05:20.290 00:05:20.290 ' 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:20.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.290 --rc genhtml_branch_coverage=1 00:05:20.290 --rc genhtml_function_coverage=1 00:05:20.290 --rc genhtml_legend=1 00:05:20.290 --rc geninfo_all_blocks=1 00:05:20.290 --rc geninfo_unexecuted_blocks=1 00:05:20.290 00:05:20.290 ' 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:20.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.290 --rc genhtml_branch_coverage=1 00:05:20.290 --rc genhtml_function_coverage=1 00:05:20.290 --rc genhtml_legend=1 00:05:20.290 --rc geninfo_all_blocks=1 00:05:20.290 --rc geninfo_unexecuted_blocks=1 00:05:20.290 00:05:20.290 ' 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:20.290 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:20.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.291 12:09:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:20.553 
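The nvmf/common.sh block traced above pins down the connection parameters the whole run reuses: port 4420, a host NQN minted by nvme gen-hostnqn, and the matching host ID. As a rough initiator-side sketch of how such values are consumed with stock nvme-cli (the 10.0.0.2 target address and cnode0 subsystem NQN are taken from later in this log, and the host-ID derivation shown is illustrative, not the harness's actual code):

  NVMF_PORT=4420
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # prints e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare UUID, matching the NVME_HOSTID value traced above
  nvme connect -t tcp -a 10.0.0.2 -s "$NVMF_PORT" -n nqn.2016-06.io.spdk:cnode0 \
       --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"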
************************************ 00:05:20.553 START TEST nvmf_abort 00:05:20.553 ************************************ 00:05:20.553 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:20.553 * Looking for test storage... 00:05:20.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:20.553 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:20.553 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:20.553 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.553 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:20.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.554 --rc genhtml_branch_coverage=1 00:05:20.554 --rc genhtml_function_coverage=1 00:05:20.554 --rc genhtml_legend=1 00:05:20.554 --rc geninfo_all_blocks=1 00:05:20.554 --rc geninfo_unexecuted_blocks=1 00:05:20.554 00:05:20.554 ' 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:20.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.554 --rc genhtml_branch_coverage=1 00:05:20.554 --rc genhtml_function_coverage=1 00:05:20.554 --rc genhtml_legend=1 00:05:20.554 --rc geninfo_all_blocks=1 00:05:20.554 --rc geninfo_unexecuted_blocks=1 00:05:20.554 00:05:20.554 ' 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:20.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.554 --rc genhtml_branch_coverage=1 00:05:20.554 --rc genhtml_function_coverage=1 00:05:20.554 --rc genhtml_legend=1 00:05:20.554 --rc geninfo_all_blocks=1 00:05:20.554 --rc geninfo_unexecuted_blocks=1 00:05:20.554 00:05:20.554 ' 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:20.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.554 --rc genhtml_branch_coverage=1 00:05:20.554 --rc genhtml_function_coverage=1 00:05:20.554 --rc genhtml_legend=1 00:05:20.554 --rc geninfo_all_blocks=1 00:05:20.554 --rc geninfo_unexecuted_blocks=1 00:05:20.554 00:05:20.554 ' 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:20.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:20.554 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:28.698 12:10:02 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:28.698 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:28.698 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:28.698 12:10:02 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:28.698 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:28.698 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:28.698 12:10:02 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:05:28.698 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:05:28.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:05:28.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms
00:05:28.698
00:05:28.699 --- 10.0.0.2 ping statistics ---
00:05:28.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:28.699 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:28.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:28.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms
00:05:28.699
00:05:28.699 --- 10.0.0.1 ping statistics ---
00:05:28.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:28.699 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=1424752
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1424752
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1424752 ']'
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:28.699 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:28.699 [2024-11-04 12:10:02.425034] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
00:05:28.699 [2024-11-04 12:10:02.425086] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:28.699 [2024-11-04 12:10:02.511476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:28.699 [2024-11-04 12:10:02.553385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:05:28.699 [2024-11-04 12:10:02.553424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:05:28.699 [2024-11-04 12:10:02.553432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:28.699 [2024-11-04 12:10:02.553439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:28.699 [2024-11-04 12:10:02.553445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:05:28.699 [2024-11-04 12:10:02.555055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:28.699 [2024-11-04 12:10:02.555216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:28.699 [2024-11-04 12:10:02.555216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:28.699 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:28.699 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0
00:05:28.699 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:05:28.699 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable
00:05:28.699 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:28.699 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:05:28.699 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:05:28.699 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:28.699 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:28.699 [2024-11-04 12:10:03.257671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:28.699 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:28.699 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:05:28.699 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:28.699 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:28.960 Malloc0
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:28.960 Delay0
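At this point the target stack is half built: the TCP transport exists and the 64 MB Malloc0 bdev is wrapped in a Delay0 delay bdev so that in-flight I/O lingers long enough to be aborted; the subsystem and listener steps follow in the trace just below. A minimal standalone sketch of the same bring-up, assuming SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock traced earlier (rpc_cmd in the harness wraps the same RPCs, and the delay-bdev latency flags are assumed to be microsecond values):

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256   # TCP transport, flags exactly as traced above
  rpc.py bdev_malloc_create 64 4096 -b Malloc0            # 64 MB RAM-backed bdev, 4096-byte blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1 s avg/p99 read and write latency (assumed unit: us)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

With roughly a second of injected latency and the abort example driving queue depth 128 from a single core, most submitted I/O sits queued at the delay bdev, which appears to be why the results below show tens of thousands of aborted ("failed") I/Os against a high abort success count.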
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:28.960 [2024-11-04 12:10:03.337848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:28.960 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:05:28.960 [2024-11-04 12:10:03.415870] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:05:31.507 Initializing NVMe Controllers
00:05:31.507 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:05:31.507 controller IO queue size 128 less than required
00:05:31.507 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:05:31.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:05:31.507 Initialization complete. Launching workers.
00:05:31.507 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28797
00:05:31.507 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28858, failed to submit 62
00:05:31.507 success 28801, unsuccessful 57, failed 0
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:05:31.507 rmmod nvme_tcp
00:05:31.507 rmmod nvme_fabrics
00:05:31.507 rmmod nvme_keyring
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1424752 ']'
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1424752
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1424752 ']'
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1424752
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1424752
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1424752'
killing process with pid 1424752
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1424752
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1424752
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:05:31.507 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:33.421 12:10:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:05:33.421
00:05:33.421 real 0m13.020s
00:05:33.421 user 0m13.637s
00:05:33.421 sys 0m6.325s
00:05:33.421 12:10:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:33.422 12:10:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:33.422 ************************************
00:05:33.422 END TEST nvmf_abort
00:05:33.422 ************************************
00:05:33.422 12:10:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:05:33.422 12:10:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:05:33.422 12:10:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:33.422 12:10:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:05:33.422 ************************************
00:05:33.422 START TEST nvmf_ns_hotplug_stress
00:05:33.422 ************************************
00:05:33.422 12:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:05:33.684 * Looking for test storage...
00:05:33.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:33.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.684 --rc genhtml_branch_coverage=1 00:05:33.684 --rc genhtml_function_coverage=1 00:05:33.684 --rc genhtml_legend=1 00:05:33.684 --rc geninfo_all_blocks=1 00:05:33.684 --rc geninfo_unexecuted_blocks=1 00:05:33.684 00:05:33.684 ' 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:33.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.684 --rc genhtml_branch_coverage=1 00:05:33.684 --rc genhtml_function_coverage=1 00:05:33.684 --rc genhtml_legend=1 00:05:33.684 --rc geninfo_all_blocks=1 00:05:33.684 --rc geninfo_unexecuted_blocks=1 00:05:33.684 00:05:33.684 ' 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:33.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.684 --rc genhtml_branch_coverage=1 00:05:33.684 --rc genhtml_function_coverage=1 00:05:33.684 --rc genhtml_legend=1 00:05:33.684 --rc geninfo_all_blocks=1 00:05:33.684 --rc geninfo_unexecuted_blocks=1 00:05:33.684 00:05:33.684 ' 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:33.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.684 --rc genhtml_branch_coverage=1 00:05:33.684 --rc genhtml_function_coverage=1 00:05:33.684 --rc genhtml_legend=1 00:05:33.684 --rc geninfo_all_blocks=1 00:05:33.684 --rc geninfo_unexecuted_blocks=1 00:05:33.684 00:05:33.684 ' 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.684 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:33.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:33.685 12:10:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:41.828 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:41.828 
12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:41.828 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:41.828 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:41.828 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:41.829 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:41.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:41.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:05:41.829 00:05:41.829 --- 10.0.0.2 ping statistics --- 00:05:41.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:41.829 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:41.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:41.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:05:41.829 00:05:41.829 --- 10.0.0.1 ping statistics --- 00:05:41.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:41.829 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1429667 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1429667 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
1429667 ']' 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.829 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:41.829 [2024-11-04 12:10:15.495444] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:05:41.829 [2024-11-04 12:10:15.495508] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:41.829 [2024-11-04 12:10:15.583818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.829 [2024-11-04 12:10:15.619237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:41.829 [2024-11-04 12:10:15.619274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:41.829 [2024-11-04 12:10:15.619282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:41.829 [2024-11-04 12:10:15.619289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:41.829 [2024-11-04 12:10:15.619295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
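The sequence just completed builds a two-host NVMe/TCP topology on a single machine: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace and addressed as the target side (10.0.0.2), while its sibling port (cvl_0_1) stays in the root namespace as the initiator side (10.0.0.1). A minimal sketch of that setup, using the interface and namespace names reported in this run (substitute your own ports; run as root):

#!/usr/bin/env bash
# Recreate the loopback topology shown in the log above.
set -e

NS=cvl_0_0_ns_spdk   # namespace name used by this run
TGT_IF=cvl_0_0       # port that becomes the target side, inside the namespace
INI_IF=cvl_0_1       # port that stays in the root namespace as the initiator side

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP traffic on the initiator port, then verify both directions.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

Because the target later runs under ip netns exec inside $NS, initiator-side tools (spdk_nvme_perf, nvme connect) can run unmodified in the root namespace and still exercise the full TCP stack between the two ports.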
00:05:41.829 [2024-11-04 12:10:15.620831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.829 [2024-11-04 12:10:15.621115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.829 [2024-11-04 12:10:15.621115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.829 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.829 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:05:41.829 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:41.829 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:41.829 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:41.829 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:41.829 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:41.829 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:42.091 [2024-11-04 12:10:16.474450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.091 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:42.351 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:42.351 [2024-11-04 12:10:16.839943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:42.352 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:42.612 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:42.873 Malloc0 00:05:42.873 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:42.873 Delay0 00:05:42.873 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.134 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:43.395 NULL1 00:05:43.395 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:43.395 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1430167 00:05:43.395 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:43.395 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:43.395 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.656 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.917 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:43.917 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:43.917 true 00:05:44.178 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:44.178 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.178 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.439 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:44.439 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:44.699 true 00:05:44.699 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:44.699 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.699 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.959 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:44.959 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:45.227 true 00:05:45.227 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:45.227 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.493 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.493 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:45.493 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:45.754 true 00:05:45.754 12:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:45.754 12:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.015 12:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.015 12:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:46.015 12:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:46.276 true 00:05:46.276 12:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:46.276 12:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.537 12:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.537 12:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:46.537 12:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:46.799 true 00:05:46.799 12:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:46.799 12:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.059 12:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.320 12:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:47.320 12:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:47.320 true 00:05:47.320 12:10:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:47.320 12:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.582 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.842 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:47.842 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:47.842 true 00:05:47.842 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:47.842 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.102 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.362 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:48.362 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:48.362 true 00:05:48.623 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:48.623 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.623 12:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.884 12:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:48.884 12:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:49.145 true 00:05:49.145 12:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:49.145 12:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.145 12:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.405 12:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:49.405 12:10:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:49.666 true 00:05:49.666 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:49.666 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.666 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.926 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:49.926 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:50.192 true 00:05:50.192 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:50.192 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.493 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.493 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:50.493 12:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:50.776 true 00:05:50.776 12:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:50.776 12:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.776 12:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.041 12:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:51.041 12:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:51.300 true 00:05:51.300 12:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:51.300 12:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.560 12:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.560 12:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:51.560 12:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:51.820 true 00:05:51.820 12:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:51.820 12:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.080 12:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.080 12:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:52.080 12:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:52.341 true 00:05:52.341 12:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:52.341 12:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.602 12:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.863 12:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:52.863 12:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:52.863 true 00:05:52.863 12:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:52.863 12:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.123 12:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.383 12:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:53.383 12:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:53.383 true 00:05:53.383 12:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:53.383 12:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.644 12:10:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.904 12:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:53.905 12:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:53.905 true 00:05:53.905 12:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:53.905 12:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.165 12:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.426 12:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:54.426 12:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:54.426 true 00:05:54.686 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:54.686 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.686 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.947 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:54.947 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:55.208 true 00:05:55.208 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:55.208 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.208 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.467 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:55.467 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:55.728 true 00:05:55.728 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:55.728 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.990 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.990 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:55.990 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:56.250 true 00:05:56.250 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:56.250 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.509 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.509 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:56.509 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:56.769 true 00:05:56.769 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:56.769 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.029 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.289 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:57.289 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:57.289 true 00:05:57.289 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:57.289 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.550 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.809 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:57.809 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:57.809 true 00:05:57.809 12:10:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:57.809 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.069 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.392 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:58.393 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:58.393 true 00:05:58.393 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:58.393 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.652 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.912 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:58.912 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:58.912 true 00:05:58.912 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:58.912 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.172 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.433 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:59.433 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:59.433 true 00:05:59.433 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:05:59.433 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.693 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.954 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:59.954 12:10:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:00.215 true 00:06:00.215 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:00.215 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.215 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.474 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:00.475 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:00.735 true 00:06:00.735 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:00.735 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.735 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.995 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:00.995 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:01.256 true 00:06:01.256 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:01.256 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.516 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.516 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:01.516 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:01.776 true 00:06:01.776 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:01.776 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.035 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.035 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:02.035 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:02.295 true 00:06:02.295 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:02.295 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.555 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.820 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:02.820 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:02.820 true 00:06:02.820 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:02.820 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.083 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.343 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:03.343 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:03.343 true 00:06:03.343 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:03.343 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.603 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.863 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:03.863 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:03.863 true 00:06:04.123 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:04.123 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.123 12:10:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.383 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:04.383 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:04.643 true 00:06:04.643 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:04.643 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.643 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.903 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:04.903 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:05.162 true 00:06:05.162 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:05.162 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.423 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.423 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:05.423 12:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:05.682 true 00:06:05.682 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:05.682 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.943 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.943 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:05.943 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:06.203 true 00:06:06.203 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:06.203 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.463 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.723 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:06.723 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:06.723 true 00:06:06.723 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:06.723 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.982 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.242 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:07.242 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:07.242 true 00:06:07.242 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:07.242 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.503 12:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.763 12:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:07.763 12:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:07.763 true 00:06:08.024 12:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:08.024 12:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.024 12:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.285 12:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:08.285 12:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:08.544 true 00:06:08.544 12:10:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:08.544 12:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.544 12:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.804 12:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:08.804 12:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:09.065 true 00:06:09.065 12:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:09.065 12:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.326 12:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.326 12:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:09.326 12:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:09.587 true 00:06:09.587 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:09.587 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.847 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.847 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:09.847 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:10.107 true 00:06:10.107 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:10.107 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.369 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.369 12:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:10.629 12:10:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:10.629 true 00:06:10.629 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:10.629 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.889 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.149 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:06:11.149 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:06:11.149 true 00:06:11.149 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:11.149 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.409 12:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.668 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:06:11.668 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:11.668 true 00:06:11.668 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:11.668 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.927 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.187 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:06:12.187 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:06:12.447 true 00:06:12.447 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167 00:06:12.447 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.447 12:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0
00:06:12.707 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:06:12.707 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:06:12.968 true
00:06:12.968 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167
00:06:12.968 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:12.968 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:13.228 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:06:13.228 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:06:13.548 true
00:06:13.548 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167
00:06:13.548 12:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:13.548 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:13.807 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:06:13.807 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:06:13.807 Initializing NVMe Controllers
00:06:13.807 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:13.807 Controller IO queue size 128, less than required.
00:06:13.807 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:13.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:13.807 Initialization complete. Launching workers.
00:06:13.807 ========================================================
00:06:13.807 Latency(us)
00:06:13.807 Device Information : IOPS MiB/s Average min max
00:06:13.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30502.03 14.89 4196.20 1580.12 10303.66
00:06:13.807 ========================================================
00:06:13.807 Total : 30502.03 14.89 4196.20 1580.12 10303.66
00:06:14.067 true
00:06:14.067 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1430167
00:06:14.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1430167) - No such process
00:06:14.067 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1430167
00:06:14.067 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:14.067 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:14.327 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:14.327 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:14.327 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:14.327 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:14.327 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:14.587 null0
00:06:14.587 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:14.587 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:14.587 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:14.587 null1
00:06:14.846 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i
)) 00:06:15.107 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:15.107 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:15.107 null4 00:06:15.368 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:15.368 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:15.368 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:15.368 null5 00:06:15.368 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:15.368 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:15.368 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:15.628 null6 00:06:15.628 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:15.628 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:15.628 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:15.891 null7 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
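The long run of remove_ns/add_ns/bdev_null_resize iterations above is the first phase of ns_hotplug_stress: namespace 1 is hot-removed and re-attached (backed by the Delay0 bdev) while a perf job drives IO at the subsystem, and the NULL1 bdev (apparently NSID 2, the namespace the perf summary reports on) is grown one step per pass, null_size 1027 through 1055 in this capture, until the IO generator exits; the kill -0 probe at script line 44 then reports "No such process" and the loop ends. A minimal sketch of that loop as it can be reconstructed from the trace; PERF_PID and the starting size are placeholder assumptions, not values taken from the SPDK script:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1026                     # placeholder start; this capture picks up at 1027

    while kill -0 "$PERF_PID"; do      # sh@44: keep looping while the IO generator lives
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-remove NSID 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-attach Delay0
        ((++null_size))                                                    # sh@49: next size
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                      # sh@50: resize NSID 2's backing bdev
    done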
00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
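Each add_remove N nullM launched in this stretch is one background worker hammering a single namespace ID. From the trace of script lines 14-18, the worker is a small function along these lines (a reconstruction, not the verbatim SPDK source; it reuses the rpc_py path from the sketch above):

    add_remove() {
        local nsid=$1 bdev=$2              # sh@14, as traced: e.g. nsid=2 bdev=null1
        for ((i = 0; i < 10; i++)); do     # sh@16: ten add/remove cycles
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }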
00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:15.891 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:15.892 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:15.892 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:15.892 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.892 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
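The surrounding entries show the fan-out itself: script lines 58-64 first create one null bdev per worker, then launch the eight add_remove jobs in the background and collect their pids for the wait traced at line 66 just below (pids 1436738-1436751). Reconstructed under the same assumptions, with the 100/4096 arguments taken verbatim from the traced bdev_null_create calls:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096   # sh@60: size 100, block size 4096, as traced
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &             # sh@63: NSID i+1 backed by null$i
        pids+=($!)                                     # sh@64: remember the worker's pid
    done
    wait "${pids[@]}"                                  # sh@66: block until all eight workers finish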
00:06:15.892 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:15.892 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:15.892 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:15.892 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1436738 1436739 1436741 1436743 1436745 1436747 1436749 1436751 00:06:15.892 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:15.892 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:15.892 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:15.892 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.892 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:15.892 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.892 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:15.892 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.155 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:16.416 12:10:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.416 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:16.416 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:16.416 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:16.416 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:16.416 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:16.416 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:16.416 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:16.416 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.416 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.416 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:16.416 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.416 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.416 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:16.677 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.677 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.677 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:16.677 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.677 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.677 12:10:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:16.677 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.677 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.677 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:16.677 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.677 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.677 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.677 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:16.677 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.677 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.677 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:16.677 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.677 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.678 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:16.678 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:16.678 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:16.678 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:16.939 12:10:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.939 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:16.940 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.940 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
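From here to the end of the capture, the eight workers' add_ns/remove_ns calls interleave freely in the shared trace; the sh@17/sh@18 markers are what let each pair be attributed to a worker. Purely as an illustration (no such call appears in the traced run), the target's live namespace map could be polled during the churn with the standard nvmf_get_subsystems RPC; the jq filter and the assumed output shape are mine, not the test's:

    "$rpc_py" nvmf_get_subsystems |
        jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'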
00:06:16.940 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:16.940 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:16.940 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.940 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.940 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:16.940 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.201 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:17.201 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:17.201 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:17.201 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:17.201 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.201 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.201 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:17.201 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.201 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.201 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:17.201 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:17.201 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:17.201 12:10:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.201 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.201 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:17.462 12:10:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.462 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:17.462 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.723 12:10:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.723 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:17.984 12:10:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:17.984 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:18.244 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:18.244 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.245 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.245 12:10:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:18.245 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:18.245 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.245 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.245 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:18.245 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:18.245 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:18.245 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.245 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.245 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:18.245 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.245 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.245 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:18.505 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.505 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.505 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:18.505 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:18.505 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.505 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.505 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:18.505 12:10:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.505 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.505 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:18.505 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:18.505 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:18.505 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.505 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.505 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:18.505 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.505 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:18.505 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.505 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.505 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:18.505 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.505 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.505 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:18.765 12:10:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:18.765 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.765 12:10:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.025 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:19.286 12:10:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.286 12:10:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.286 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:19.547 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.547 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.547 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.547 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.547 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.547 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.547 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.807 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.807 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.807 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.807 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.807 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:19.807 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:19.807 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:19.807 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 
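The storm of rpc.py calls traced above is the core of the hotplug stress: loops of ten iterations (apparently several racing in parallel, given the interleaved ordering) attach and detach namespaces on nqn.2016-06.io.spdk:cnode1. A minimal sketch of one such loop, assuming a randomized nsid pick and the null0..null7 bdev naming; the RPC verbs are copied from the trace, but the exact selection logic in ns_hotplug_stress.sh is not shown in this log:

    rpc=./spdk/scripts/rpc.py                  # assumed path to the RPC helper
    nqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; i++)); do             # matches the (( i < 10 )) guard above
        n=$((RANDOM % 8 + 1))                  # nsid 1..8 maps to bdevs null0..null7
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" || true
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$((RANDOM % 8 + 1))" || true
    done

The || true matters: with several of these loops racing, add and remove calls are expected to collide and fail, which is exactly the hotplug condition under test.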
00:06:19.807 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:19.807 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:19.807 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:19.808 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:19.808 rmmod nvme_tcp 00:06:19.808 rmmod nvme_fabrics 00:06:19.808 rmmod nvme_keyring 00:06:19.808 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:19.808 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:19.808 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:19.808 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1429667 ']' 00:06:19.808 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1429667 00:06:19.808 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1429667 ']' 00:06:19.808 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1429667 00:06:19.808 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:19.808 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.808 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1429667 00:06:19.808 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:19.808 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:19.808 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1429667' 00:06:19.808 killing process with pid 1429667 00:06:19.808 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1429667 00:06:19.808 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1429667 00:06:20.068 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:20.068 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:20.068 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:20.068 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:20.068 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:20.068 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:06:20.068 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:06:20.068 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:20.068 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:20.068 12:10:54 
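Teardown starts here: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded and the target process (pid 1429667, an SPDK reactor) is killed. The killprocess steps visible above reduce to roughly this sketch (simplified; the real helper in autotest_common.sh also branches on uname, per the '[' Linux = Linux ']' check):

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 1                  # is it still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")     # reactor_1 in this run
        [[ $name == sudo ]] && return 1             # refuse to kill a sudo shim
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                     # reap if it is our child
    }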
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.068 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.068 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.979 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:21.979 00:06:21.979 real 0m48.558s 00:06:21.979 user 3m19.466s 00:06:21.979 sys 0m16.843s 00:06:21.979 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.979 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:21.979 ************************************ 00:06:21.979 END TEST nvmf_ns_hotplug_stress 00:06:21.979 ************************************ 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:22.239 ************************************ 00:06:22.239 START TEST nvmf_delete_subsystem 00:06:22.239 ************************************ 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:22.239 * Looking for test storage... 
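With ns_hotplug_stress finished (48.6 s wall clock), the harness moves on to nvmf_delete_subsystem. The START/END banners and the real/user/sys block come from the run_test wrapper; a simplified sketch, assuming the real helper in autotest_common.sh does more than this (the '[' 3 -le 1 ']' trace is its argument-count check):

    run_test_sketch() {
        local name=$1; shift
        (($# >= 1)) || return 1          # refuse an empty command line
        echo "START TEST $name"
        time "$@"                        # emits the real/user/sys summary
        echo "END TEST $name"
    }
    run_test_sketch nvmf_delete_subsystem ./delete_subsystem.sh --transport=tcp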
00:06:22.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:22.239 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:22.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.240 --rc genhtml_branch_coverage=1 00:06:22.240 --rc genhtml_function_coverage=1 00:06:22.240 --rc genhtml_legend=1 00:06:22.240 --rc geninfo_all_blocks=1 00:06:22.240 --rc geninfo_unexecuted_blocks=1 00:06:22.240 00:06:22.240 ' 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:22.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.240 --rc genhtml_branch_coverage=1 00:06:22.240 --rc genhtml_function_coverage=1 00:06:22.240 --rc genhtml_legend=1 00:06:22.240 --rc geninfo_all_blocks=1 00:06:22.240 --rc geninfo_unexecuted_blocks=1 00:06:22.240 00:06:22.240 ' 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:22.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.240 --rc genhtml_branch_coverage=1 00:06:22.240 --rc genhtml_function_coverage=1 00:06:22.240 --rc genhtml_legend=1 00:06:22.240 --rc geninfo_all_blocks=1 00:06:22.240 --rc geninfo_unexecuted_blocks=1 00:06:22.240 00:06:22.240 ' 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:22.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.240 --rc genhtml_branch_coverage=1 00:06:22.240 --rc genhtml_function_coverage=1 00:06:22.240 --rc genhtml_legend=1 00:06:22.240 --rc geninfo_all_blocks=1 00:06:22.240 --rc geninfo_unexecuted_blocks=1 00:06:22.240 00:06:22.240 ' 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
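The scripts/common.sh trace above is a dotted-version comparison: lcov --version reports 1.15, and lt 1.15 2 splits both strings on '.' and '-' and compares field by field, so the pre-2.x LCOV_OPTS branch is taken. A compact re-implementation of just the less-than case (simplified and numeric-only; the real cmp_versions dispatches on the operator via the case "$op" branch shown):

    ver_lt() {
        local IFS=.-                               # split on dots and dashes
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do              # missing fields count as 0
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                   # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* option names"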
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:22.240 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
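The three giant PATH lines above are paths/export.sh re-prepending the Go, protoc and golangci toolchains on every source, which is why the same /opt entries repeat many times over; boiled down, each pass does:

    PATH=/opt/golangci/1.54.2/bin:$PATH    # export.sh@2
    PATH=/opt/go/1.21.1/bin:$PATH          # export.sh@3
    PATH=/opt/protoc/21.7/bin:$PATH        # export.sh@4
    export PATH                            # export.sh@5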
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:22.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:22.503 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
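Note the genuine shell error captured just above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test's -eq operator requires integers, hence "integer expression expected" when the flag variable is unset. The usual guard defaults the empty value to 0 (variable name here is illustrative, not taken from common.sh):

    [ "${some_flag:-0}" -eq 1 ] && echo "flag branch taken"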
local -ga x722 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:30.639 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:30.640 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:30.640 
12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:30.640 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:30.640 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:30.640 Found net devices under 0000:4b:00.1: cvl_0_1 
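Both E810 ports (0000:4b:00.0 and .1, device 0x159b) resolve to the cvl_0_0 and cvl_0_1 interfaces through the sysfs walk traced above. The array manipulations below are copied from the trace; the surrounding loop is a simplified rendering of nvmf/common.sh@408-@427:

    pci_devs=(0000:4b:00.0 0000:4b:00.1)             # as discovered above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue      # glob may match nothing
        pci_net_devs=("${pci_net_devs[@]##*/}")      # strip the sysfs prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done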
00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:30.640 12:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:30.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:30.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:06:30.640 00:06:30.640 --- 10.0.0.2 ping statistics --- 00:06:30.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.640 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:30.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:30.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:06:30.640 00:06:30.640 --- 10.0.0.1 ping statistics --- 00:06:30.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.640 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1442069 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1442069 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1442069 ']' 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.640 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.640 12:11:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.641 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.641 12:11:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.641 [2024-11-04 12:11:04.204710] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:06:30.641 [2024-11-04 12:11:04.204787] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.641 [2024-11-04 12:11:04.278181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.641 [2024-11-04 12:11:04.319943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:30.641 [2024-11-04 12:11:04.319982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:30.641 [2024-11-04 12:11:04.319990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:30.641 [2024-11-04 12:11:04.319997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:30.641 [2024-11-04 12:11:04.320003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:30.641 [2024-11-04 12:11:04.321257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.641 [2024-11-04 12:11:04.321259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.641 [2024-11-04 12:11:05.063327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:30.641 12:11:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.641 [2024-11-04 12:11:05.087501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.641 NULL1 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.641 Delay0 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1442254 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:30.641 12:11:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:30.641 [2024-11-04 12:11:05.184286] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
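Setup is now complete: nvmf_tgt listens on 10.0.0.2:4420 inside the cvl_0_0_ns_spdk namespace, and the subsystem's only namespace is a null bdev wrapped in a delay bdev, so each I/O spends roughly a second in flight; that window is what lets the nvmf_delete_subsystem call below race against outstanding commands. The traced rpc_cmd sequence corresponds to these direct scripts/rpc.py invocations (rpc_cmd in autotest_common.sh is effectively a wrapper around rpc.py; the delay arguments are in microseconds):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MiB backing bdev, 512-byte blocks
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0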
00:06:33.184 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:33.184 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.184 12:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[Repeated interleaved 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)', and 'starting I/O failed: -6' lines elided: these are the in-flight perf I/Os being failed back as the subsystem is torn down. The driver diagnostics embedded in that output were:]
00:06:33.185 [2024-11-04 12:11:07.389679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2106750 is same with the state(6) to be set
00:06:34.175 [2024-11-04 12:11:08.365774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2107a70 is same with the state(6) to be set
00:06:34.175 [2024-11-04 12:11:08.393617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2106930 is same with the state(6) to be set
00:06:34.175 [2024-11-04 12:11:08.393782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2106570 is same with the state(6) to be set
00:06:34.175 [2024-11-04 12:11:08.396069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f935000d640 is same with the state(6) to be set
00:06:34.176 [2024-11-04 12:11:08.396211] nvme_tcp.c:
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f935000cfe0 is same with the state(6) to be set 00:06:34.176 Initializing NVMe Controllers 00:06:34.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:34.176 Controller IO queue size 128, less than required. 00:06:34.176 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:34.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:34.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:34.176 Initialization complete. Launching workers. 00:06:34.176 ======================================================== 00:06:34.176 Latency(us) 00:06:34.176 Device Information : IOPS MiB/s Average min max 00:06:34.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.78 0.08 888008.46 234.04 1007187.83 00:06:34.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 185.23 0.09 909762.41 425.06 1009716.06 00:06:34.176 ======================================================== 00:06:34.176 Total : 358.02 0.17 899263.63 234.04 1009716.06 00:06:34.176 00:06:34.176 [2024-11-04 12:11:08.396731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2107a70 (9): Bad file descriptor 00:06:34.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:34.176 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.176 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:34.176 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1442254 00:06:34.176 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1442254 00:06:34.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1442254) - No such process 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1442254 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1442254 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1442254 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:34.502 12:11:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.502 [2024-11-04 12:11:08.928631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1442954 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1442954 00:06:34.502 12:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:34.502 [2024-11-04 12:11:09.005305] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
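The alternating kill -0 / sleep 0.5 trace that follows is the test's bounded wait: after the second perf run (-t 3 this time) is launched against the recreated subsystem, delete_subsystem.sh polls until the perf process exits and gives up after roughly ten seconds. A condensed paraphrase of the traced loop; the failure action is an assumption, since the trace only shows the delay check itself:

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # perf still running?
      (( delay++ > 20 )) && exit 1            # assumed bail-out after ~20 * 0.5 s
      sleep 0.5
  done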
00:06:35.095 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:35.095 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1442954 00:06:35.095 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:35.665 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:35.665 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1442954 00:06:35.665 12:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:35.926 12:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:35.926 12:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1442954 00:06:35.926 12:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:36.496 12:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:36.496 12:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1442954 00:06:36.496 12:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:37.066 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:37.066 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1442954 00:06:37.066 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:37.638 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:37.638 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1442954 00:06:37.638 12:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:37.638 Initializing NVMe Controllers 00:06:37.638 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:37.638 Controller IO queue size 128, less than required. 00:06:37.638 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:37.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:37.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:37.638 Initialization complete. Launching workers. 
00:06:37.638 ======================================================== 00:06:37.638 Latency(us) 00:06:37.638 Device Information : IOPS MiB/s Average min max 00:06:37.638 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002013.95 1000135.60 1006872.09 00:06:37.638 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002910.34 1000299.96 1040873.04 00:06:37.638 ======================================================== 00:06:37.638 Total : 256.00 0.12 1002462.15 1000135.60 1040873.04 00:06:37.638 00:06:37.638 [2024-11-04 12:11:12.137076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd19d50 is same with the state(6) to be set 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1442954 00:06:38.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1442954) - No such process 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1442954 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:38.209 rmmod nvme_tcp 00:06:38.209 rmmod nvme_fabrics 00:06:38.209 rmmod nvme_keyring 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1442069 ']' 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1442069 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1442069 ']' 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1442069 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:06:38.209 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1442069 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1442069' 00:06:38.210 killing process with pid 1442069 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1442069 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1442069 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.210 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.754 12:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:40.754 00:06:40.754 real 0m18.230s 00:06:40.754 user 0m30.896s 00:06:40.754 sys 0m6.666s 00:06:40.754 12:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.754 12:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:40.754 ************************************ 00:06:40.754 END TEST nvmf_delete_subsystem 00:06:40.754 ************************************ 00:06:40.754 12:11:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:40.754 12:11:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:40.754 12:11:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.754 12:11:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:40.754 ************************************ 00:06:40.754 START TEST nvmf_host_management 00:06:40.754 ************************************ 00:06:40.754 12:11:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 
00:06:40.754 * Looking for test storage... 00:06:40.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:40.754 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:40.754 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:40.754 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:40.754 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:40.754 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.754 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:40.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.755 --rc genhtml_branch_coverage=1 00:06:40.755 --rc genhtml_function_coverage=1 00:06:40.755 --rc genhtml_legend=1 00:06:40.755 --rc geninfo_all_blocks=1 00:06:40.755 --rc geninfo_unexecuted_blocks=1 00:06:40.755 00:06:40.755 ' 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:40.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.755 --rc genhtml_branch_coverage=1 00:06:40.755 --rc genhtml_function_coverage=1 00:06:40.755 --rc genhtml_legend=1 00:06:40.755 --rc geninfo_all_blocks=1 00:06:40.755 --rc geninfo_unexecuted_blocks=1 00:06:40.755 00:06:40.755 ' 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:40.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.755 --rc genhtml_branch_coverage=1 00:06:40.755 --rc genhtml_function_coverage=1 00:06:40.755 --rc genhtml_legend=1 00:06:40.755 --rc geninfo_all_blocks=1 00:06:40.755 --rc geninfo_unexecuted_blocks=1 00:06:40.755 00:06:40.755 ' 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:40.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.755 --rc genhtml_branch_coverage=1 00:06:40.755 --rc genhtml_function_coverage=1 00:06:40.755 --rc genhtml_legend=1 00:06:40.755 --rc geninfo_all_blocks=1 00:06:40.755 --rc geninfo_unexecuted_blocks=1 00:06:40.755 00:06:40.755 ' 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:40.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:40.755 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:40.756 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.756 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.756 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.756 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:40.756 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:40.756 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:40.756 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:48.898 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:48.898 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:48.898 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.898 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.899 12:11:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:48.899 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:48.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:48.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:06:48.899 00:06:48.899 --- 10.0.0.2 ping statistics --- 00:06:48.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.899 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:48.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:48.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:06:48.899 00:06:48.899 --- 10.0.0.1 ping statistics --- 00:06:48.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.899 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1447958 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1447958 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:48.899 12:11:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1447958 ']' 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.899 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.899 [2024-11-04 12:11:22.448957] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:06:48.899 [2024-11-04 12:11:22.449024] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.899 [2024-11-04 12:11:22.538176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.899 [2024-11-04 12:11:22.591860] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.899 [2024-11-04 12:11:22.591914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.899 [2024-11-04 12:11:22.591924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.899 [2024-11-04 12:11:22.591931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.899 [2024-11-04 12:11:22.591937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
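
The nvmf_tcp_init sequence traced above (nvmf/common.sh@265-291) is the whole of the test's network prep: one port of the e810 pair is pushed into a private network namespace so the SPDK target and the initiator can exercise real NIC hardware on a single host. A minimal standalone sketch of that topology, assuming the two ice ports are named cvl_0_0 and cvl_0_1 as in this run:

    # target side lives in its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator port stays in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # one ping in each direction gates the rest of the test
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every command above is lifted from the trace; once both pings return, nvmf_tgt is launched inside the namespace by prefixing it with the stored NVMF_TARGET_NS_CMD (ip netns exec cvl_0_0_ns_spdk), which is exactly the nvmfpid=1447958 startup that follows.
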
00:06:48.899 [2024-11-04 12:11:22.593956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.899 [2024-11-04 12:11:22.594242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.899 [2024-11-04 12:11:22.594407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:48.899 [2024-11-04 12:11:22.594409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.899 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.899 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:48.899 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:48.899 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:48.899 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.899 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:48.899 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:48.899 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.899 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.899 [2024-11-04 12:11:23.306437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.899 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.899 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:48.899 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:48.899 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.899 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.900 Malloc0 00:06:48.900 [2024-11-04 12:11:23.380939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1448330 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1448330 /var/tmp/bdevperf.sock 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1448330 ']' 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:48.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:48.900 { 00:06:48.900 "params": { 00:06:48.900 "name": "Nvme$subsystem", 00:06:48.900 "trtype": "$TEST_TRANSPORT", 00:06:48.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:48.900 "adrfam": "ipv4", 00:06:48.900 "trsvcid": "$NVMF_PORT", 00:06:48.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:48.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:48.900 "hdgst": ${hdgst:-false}, 00:06:48.900 "ddgst": ${ddgst:-false} 00:06:48.900 }, 00:06:48.900 "method": "bdev_nvme_attach_controller" 00:06:48.900 } 00:06:48.900 EOF 00:06:48.900 )") 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:48.900 12:11:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:48.900 "params": { 00:06:48.900 "name": "Nvme0", 00:06:48.900 "trtype": "tcp", 00:06:48.900 "traddr": "10.0.0.2", 00:06:48.900 "adrfam": "ipv4", 00:06:48.900 "trsvcid": "4420", 00:06:48.900 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:48.900 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:48.900 "hdgst": false, 00:06:48.900 "ddgst": false 00:06:48.900 }, 00:06:48.900 "method": "bdev_nvme_attach_controller" 00:06:48.900 }' 00:06:49.161 [2024-11-04 12:11:23.485891] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
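
The gen_nvmf_target_json expansion above is how bdevperf gets its single controller: the heredoc template is filled from the environment (tcp transport, target IP 10.0.0.2, port 4420) and fed in as --json /dev/fd/63. The equivalent attach issued by hand with SPDK's RPC client, as a sketch with every value copied from the rendered config:

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0

Note the two -s flags do different jobs: the global one selects bdevperf's RPC socket (/var/tmp/bdevperf.sock, set by -r on the bdevperf command line above), while the subcommand one is the trsvcid. hdgst and ddgst are left at their false defaults, matching the rendered JSON.
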
00:06:49.161 [2024-11-04 12:11:23.485944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448330 ] 00:06:49.161 [2024-11-04 12:11:23.546784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.161 [2024-11-04 12:11:23.583088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.161 Running I/O for 10 seconds... 00:06:49.732 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.732 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:49.732 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:49.732 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.732 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:49.995 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.995 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:49.995 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=910 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 910 -ge 100 ']' 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:49.996 12:11:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:49.996 [2024-11-04 12:11:24.356023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e1f0 is same with the state(6) to be set 00:06:49.996 [2024-11-04 12:11:24.356075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e1f0 is same with the state(6) to be set 00:06:49.996 [2024-11-04 12:11:24.356083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e1f0 is same with the state(6) to be set 00:06:49.996 [2024-11-04 12:11:24.356090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e1f0 is same with the state(6) to be set 00:06:49.996 [2024-11-04 12:11:24.356097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e1f0 is same with the state(6) to be set 00:06:49.996 [2024-11-04 12:11:24.356110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e1f0 is same with the state(6) to be set 00:06:49.996 [2024-11-04 12:11:24.356118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e1f0 is same with the state(6) to be set 00:06:49.996 [2024-11-04 12:11:24.356125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e1f0 is same with the state(6) to be set 00:06:49.996 [2024-11-04 12:11:24.356132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e1f0 is same with the state(6) to be set 00:06:49.996 [2024-11-04 12:11:24.356138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e1f0 is same with the state(6) to be set 00:06:49.996 [2024-11-04 12:11:24.356145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e1f0 is same with the state(6) to be set 00:06:49.996 [2024-11-04 12:11:24.356151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e1f0 is same with the state(6) to be set 00:06:49.996 [2024-11-04 12:11:24.356158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e1f0 is same with the state(6) to be set 00:06:49.996 [2024-11-04 12:11:24.356164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e1f0 is same with the state(6) to be set 00:06:49.996 [2024-11-04 12:11:24.356172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e1f0 is same with the state(6) to be set 00:06:49.996 [2024-11-04 12:11:24.356179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e1f0 is same with the state(6) to be set 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:49.996 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:49.996 [2024-11-04 12:11:24.364411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.996 [2024-11-04 12:11:24.364448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.996 [2024-11-04 12:11:24.364467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.996 [2024-11-04 12:11:24.364483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:49.996 [2024-11-04 12:11:24.364499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10930c0 is same with the state(6) to be set 00:06:49.996 [2024-11-04 12:11:24.364579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.996 [2024-11-04 12:11:24.364900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.996 [2024-11-04 12:11:24.364909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.997 [2024-11-04 12:11:24.364916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.997 [2024-11-04 12:11:24.364926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.997 [2024-11-04 12:11:24.364934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.997 [2024-11-04 12:11:24.364943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.997 [2024-11-04 12:11:24.364951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.997 [2024-11-04 12:11:24.364960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.997 [2024-11-04 12:11:24.364968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.997 [2024-11-04 12:11:24.364978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.997 [2024-11-04 12:11:24.364985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.997 [2024-11-04 12:11:24.364995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.997 [2024-11-04 12:11:24.365002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.997 [2024-11-04 12:11:24.365012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:49.997 [2024-11-04 12:11:24.365019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:49.997 [2024-11-04 12:11:24.365030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.997 [2024-11-04 12:11:24.365602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.997 [2024-11-04 12:11:24.365609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.998 [2024-11-04 12:11:24.365619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.998 [2024-11-04 12:11:24.365626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.998 [2024-11-04 12:11:24.365636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.998 [2024-11-04 12:11:24.365643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.998 [2024-11-04 12:11:24.365652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.998 [2024-11-04 12:11:24.365660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.998 [2024-11-04 12:11:24.365669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.998 [2024-11-04 12:11:24.365676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.998 [2024-11-04 12:11:24.365686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.998 [2024-11-04 12:11:24.365693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.998 [2024-11-04 12:11:24.365702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:49.998 [2024-11-04 12:11:24.365710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:49.998 [2024-11-04 12:11:24.365765] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12ac370 was disconnected and freed. reset controller.
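The burst of notices above is every command still queued on I/O qpair 1 completing with ABORTED - SQ DELETION (generic status 00/08) after the target side tore down the submission queue mid-run; bdev_nvme then frees the disconnected qpair and schedules a controller reset, which succeeds a few lines below. For orientation only, a minimal sketch of driving the same workload by hand, assuming nvmf/common.sh has been sourced (it provides the gen_nvmf_target_json helper traced below) and a target is already serving nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Same invocation as the trace below: attach Nvme0 through the generated
  # bdev_nvme_attach_controller config and run verify I/O at queue depth 64
  # with 64 KiB I/Os for 1 second; --json <(...) is what /dev/fd/62 resolves to.
  "$SPDK/build/examples/bdevperf" --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1

Killing the target process while this runs would reproduce the abort storm and the reset attempt recorded here.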
00:06:49.998 [2024-11-04 12:11:24.366958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:06:49.998 task offset: 1408 on job bdev=Nvme0n1 fails
00:06:49.998
00:06:49.998 Latency(us)
00:06:49.998 [2024-11-04T11:11:24.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:49.998 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:49.998 Job: Nvme0n1 ended in about 0.64 seconds with error
00:06:49.998 Verification LBA range: start 0x0 length 0x400
00:06:49.998 Nvme0n1 : 0.64 1604.42 100.28 100.28 0.00 36702.80 1802.24 33423.36
00:06:49.998 [2024-11-04T11:11:24.568Z] ===================================================================================================================
00:06:49.998 [2024-11-04T11:11:24.568Z] Total : 1604.42 100.28 100.28 0.00 36702.80 1802.24 33423.36
00:06:49.998 [2024-11-04 12:11:24.368944] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:49.998 [2024-11-04 12:11:24.368965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10930c0 (9): Bad file descriptor
00:06:49.998 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.998 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:06:49.998 [2024-11-04 12:11:24.419824] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:06:50.939 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1448330
00:06:50.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1448330) - No such process
00:06:50.939 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:06:50.939 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:06:50.939 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:06:50.939 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:06:50.939 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=()
00:06:50.939 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config
00:06:50.939 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:06:50.939 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:06:50.939 {
00:06:50.939 "params": {
00:06:50.939 "name": "Nvme$subsystem",
00:06:50.939 "trtype": "$TEST_TRANSPORT",
00:06:50.939 "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:50.939 "adrfam": "ipv4",
00:06:50.939 "trsvcid": "$NVMF_PORT",
00:06:50.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:50.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:50.939 "hdgst": ${hdgst:-false},
00:06:50.939 "ddgst": ${ddgst:-false}
00:06:50.939 },
00:06:50.939 "method": "bdev_nvme_attach_controller"
00:06:50.939 }
00:06:50.939 EOF
00:06:50.939 )")
00:06:50.939 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat
00:06:50.939 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq .
00:06:50.939 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=,
00:06:50.939 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:06:50.939 "params": {
00:06:50.939 "name": "Nvme0",
00:06:50.939 "trtype": "tcp",
00:06:50.939 "traddr": "10.0.0.2",
00:06:50.939 "adrfam": "ipv4",
00:06:50.939 "trsvcid": "4420",
00:06:50.939 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:50.939 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:06:50.939 "hdgst": false,
00:06:50.939 "ddgst": false
00:06:50.939 },
00:06:50.939 "method": "bdev_nvme_attach_controller"
00:06:50.939 }'
00:06:50.939 [2024-11-04 12:11:25.432410] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
00:06:50.939 [2024-11-04 12:11:25.432464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448680 ]
00:06:50.939 [2024-11-04 12:11:25.492683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:51.200 [2024-11-04 12:11:25.527438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.461 Running I/O for 1 seconds...
00:06:52.404 1856.00 IOPS, 116.00 MiB/s
00:06:52.404
00:06:52.404 Latency(us)
00:06:52.404 [2024-11-04T11:11:26.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:52.404 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:52.404 Verification LBA range: start 0x0 length 0x400
00:06:52.404 Nvme0n1 : 1.01 1894.46 118.40 0.00 0.00 33144.95 5051.73 29491.20
00:06:52.404 [2024-11-04T11:11:26.974Z] ===================================================================================================================
00:06:52.404 [2024-11-04T11:11:26.974Z] Total : 1894.46 118.40 0.00 0.00 33144.95 5051.73 29491.20
00:06:52.404 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:06:52.404 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:06:52.404 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:06:52.404 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:06:52.665 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:06:52.665 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup
00:06:52.665 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:06:52.665 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:52.665 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:06:52.665 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:52.665 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- #
modprobe -v -r nvme-tcp 00:06:52.665 rmmod nvme_tcp 00:06:52.665 rmmod nvme_fabrics 00:06:52.665 rmmod nvme_keyring 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1447958 ']' 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1447958 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1447958 ']' 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1447958 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1447958 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1447958' 00:06:52.665 killing process with pid 1447958 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1447958 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1447958 00:06:52.665 [2024-11-04 12:11:27.188819] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.665 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:55.211 00:06:55.211 real 0m14.389s 00:06:55.211 user 0m23.166s 00:06:55.211 sys 0m6.507s 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.211 ************************************ 00:06:55.211 END TEST nvmf_host_management 00:06:55.211 ************************************ 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:55.211 ************************************ 00:06:55.211 START TEST nvmf_lvol 00:06:55.211 ************************************ 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:55.211 * Looking for test storage... 00:06:55.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:55.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.211 --rc genhtml_branch_coverage=1 00:06:55.211 --rc genhtml_function_coverage=1 00:06:55.211 --rc genhtml_legend=1 00:06:55.211 --rc geninfo_all_blocks=1 00:06:55.211 --rc geninfo_unexecuted_blocks=1 00:06:55.211 00:06:55.211 ' 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:55.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.211 --rc genhtml_branch_coverage=1 00:06:55.211 --rc genhtml_function_coverage=1 00:06:55.211 --rc genhtml_legend=1 00:06:55.211 --rc geninfo_all_blocks=1 00:06:55.211 --rc geninfo_unexecuted_blocks=1 00:06:55.211 00:06:55.211 ' 00:06:55.211 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:55.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.211 --rc genhtml_branch_coverage=1 00:06:55.211 --rc genhtml_function_coverage=1 00:06:55.211 --rc genhtml_legend=1 00:06:55.211 --rc geninfo_all_blocks=1 00:06:55.211 --rc geninfo_unexecuted_blocks=1 00:06:55.211 00:06:55.212 ' 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:55.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.212 --rc genhtml_branch_coverage=1 00:06:55.212 --rc genhtml_function_coverage=1 00:06:55.212 --rc genhtml_legend=1 00:06:55.212 --rc geninfo_all_blocks=1 00:06:55.212 --rc geninfo_unexecuted_blocks=1 00:06:55.212 00:06:55.212 ' 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.212 12:11:29 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:55.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:55.212 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:03.348 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:03.348 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:03.348 12:11:36 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:03.348 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:03.348 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:03.348 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:03.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:03.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.717 ms 00:07:03.349 00:07:03.349 --- 10.0.0.2 ping statistics --- 00:07:03.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.349 rtt min/avg/max/mdev = 0.717/0.717/0.717/0.000 ms 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:03.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:03.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:07:03.349 00:07:03.349 --- 10.0.0.1 ping statistics --- 00:07:03.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.349 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1453261 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1453261 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1453261 ']' 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.349 12:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:03.349 [2024-11-04 12:11:36.869515] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
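To recap the link setup just traced: the harness keeps one cvl port (cvl_0_1, 10.0.0.1) in the default namespace as the initiator side, moves the other (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace for the target, and then verifies reachability both ways. A condensed sketch of the same sequence, with the iptables rule's -m comment option omitted for brevity:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port enters the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The nvmf_tgt process started next runs inside the same namespace via ip netns exec cvl_0_0_ns_spdk.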
00:07:03.349 [2024-11-04 12:11:36.869584] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.349 [2024-11-04 12:11:36.944739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.349 [2024-11-04 12:11:36.987455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:03.349 [2024-11-04 12:11:36.987497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:03.349 [2024-11-04 12:11:36.987505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:03.349 [2024-11-04 12:11:36.987512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:03.349 [2024-11-04 12:11:36.987518] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:03.349 [2024-11-04 12:11:36.988899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.349 [2024-11-04 12:11:36.989023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.349 [2024-11-04 12:11:36.989026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.349 12:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.349 12:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:03.349 12:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:03.349 12:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:03.349 12:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:03.349 12:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.349 12:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:03.349 [2024-11-04 12:11:37.872142] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.349 12:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:03.609 12:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:03.609 12:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:03.870 12:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:03.870 12:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:04.129 12:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:04.129 12:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=00c21c85-fe17-4763-a038-a91f07093af0 00:07:04.129 12:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 00c21c85-fe17-4763-a038-a91f07093af0 lvol 20
00:07:04.389 12:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=72644d89-2c8b-4c98-b5f5-513b12245e63
00:07:04.389 12:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:07:04.649 12:11:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 72644d89-2c8b-4c98-b5f5-513b12245e63
00:07:04.649 12:11:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:07:04.909 [2024-11-04 12:11:39.337944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:04.909 12:11:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:05.169 12:11:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:07:05.169 12:11:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1453764
00:07:05.169 12:11:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:07:06.108 12:11:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 72644d89-2c8b-4c98-b5f5-513b12245e63 MY_SNAPSHOT
00:07:06.369 12:11:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=152cb753-4842-47e2-b6c4-48cb43595cc1
00:07:06.369 12:11:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 72644d89-2c8b-4c98-b5f5-513b12245e63 30
00:07:06.629 12:11:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 152cb753-4842-47e2-b6c4-48cb43595cc1 MY_CLONE
00:07:06.890 12:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ff87e383-b345-4d94-985a-0fdfe8b2b838
00:07:06.890 12:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ff87e383-b345-4d94-985a-0fdfe8b2b838
00:07:07.150 12:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1453764
00:07:15.291 Initializing NVMe Controllers
00:07:15.291 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:07:15.291 Controller IO queue size 128, less than required.
00:07:15.291 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:15.291 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:07:15.291 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:07:15.291 Initialization complete. Launching workers.
00:07:15.291 ========================================================
00:07:15.291 Latency(us)
00:07:15.291 Device Information : IOPS MiB/s Average min max
00:07:15.291 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12217.20 47.72 10481.15 1507.09 57321.96
00:07:15.291 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17543.40 68.53 7296.53 482.78 57800.47
00:07:15.291 ========================================================
00:07:15.291 Total : 29760.60 116.25 8603.87 482.78 57800.47
00:07:15.291
00:07:15.291 12:11:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:15.552 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 72644d89-2c8b-4c98-b5f5-513b12245e63
00:07:15.814 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 00c21c85-fe17-4763-a038-a91f07093af0
00:07:15.814 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:07:15.814 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:07:15.814 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:07:15.814 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup
00:07:15.814 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:16.075 rmmod nvme_tcp
00:07:16.075 rmmod nvme_fabrics
00:07:16.075 rmmod nvme_keyring
00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1453261 ']'
00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1453261
00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1453261 ']'
00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1453261
00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1453261
00:07:16.075 12:11:50
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1453261' 00:07:16.075 killing process with pid 1453261 00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1453261 00:07:16.075 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1453261 00:07:16.336 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:16.336 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:16.336 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:16.336 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:16.336 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:07:16.336 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:16.336 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:07:16.336 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:16.336 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:16.336 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.336 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.336 12:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.251 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:18.251 00:07:18.251 real 0m23.361s 00:07:18.251 user 1m3.829s 00:07:18.251 sys 0m8.353s 00:07:18.251 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.251 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:18.251 ************************************ 00:07:18.251 END TEST nvmf_lvol 00:07:18.251 ************************************ 00:07:18.251 12:11:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:18.251 12:11:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:18.251 12:11:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.251 12:11:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.251 ************************************ 00:07:18.251 START TEST nvmf_lvs_grow 00:07:18.251 ************************************ 00:07:18.512 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:18.512 * Looking for test storage... 
00:07:18.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.512 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:18.512 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:18.512 12:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:18.512 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:18.512 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.512 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.512 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:18.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.513 --rc genhtml_branch_coverage=1 00:07:18.513 --rc genhtml_function_coverage=1 00:07:18.513 --rc genhtml_legend=1 00:07:18.513 --rc geninfo_all_blocks=1 00:07:18.513 --rc geninfo_unexecuted_blocks=1 00:07:18.513 00:07:18.513 ' 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:18.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.513 --rc genhtml_branch_coverage=1 00:07:18.513 --rc genhtml_function_coverage=1 00:07:18.513 --rc genhtml_legend=1 00:07:18.513 --rc geninfo_all_blocks=1 00:07:18.513 --rc geninfo_unexecuted_blocks=1 00:07:18.513 00:07:18.513 ' 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:18.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.513 --rc genhtml_branch_coverage=1 00:07:18.513 --rc genhtml_function_coverage=1 00:07:18.513 --rc genhtml_legend=1 00:07:18.513 --rc geninfo_all_blocks=1 00:07:18.513 --rc geninfo_unexecuted_blocks=1 00:07:18.513 00:07:18.513 ' 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:18.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.513 --rc genhtml_branch_coverage=1 00:07:18.513 --rc genhtml_function_coverage=1 00:07:18.513 --rc genhtml_legend=1 00:07:18.513 --rc geninfo_all_blocks=1 00:07:18.513 --rc geninfo_unexecuted_blocks=1 00:07:18.513 00:07:18.513 ' 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:18.513 12:11:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.513 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.514 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.514 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:18.514 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:18.514 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:18.514 12:11:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:26.658 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:26.658 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:26.658 12:12:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:26.658 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:26.658 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:07:26.658 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:26.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:07:26.659 00:07:26.659 --- 10.0.0.2 ping statistics --- 00:07:26.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.659 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:26.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:26.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:07:26.659 00:07:26.659 --- 10.0.0.1 ping statistics --- 00:07:26.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.659 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1460371 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1460371 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1460371 ']' 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.659 12:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:26.659 [2024-11-04 12:12:00.532206] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:07:26.659 [2024-11-04 12:12:00.532276] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.659 [2024-11-04 12:12:00.607134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.659 [2024-11-04 12:12:00.649781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.659 [2024-11-04 12:12:00.649830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.659 [2024-11-04 12:12:00.649838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.659 [2024-11-04 12:12:00.649845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.659 [2024-11-04 12:12:00.649851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:26.659 [2024-11-04 12:12:00.650497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.920 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.920 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:26.920 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:26.920 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:26.920 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:26.920 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.920 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:27.180 [2024-11-04 12:12:01.517996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.180 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:27.180 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.180 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.180 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:27.180 ************************************ 00:07:27.180 START TEST lvs_grow_clean 00:07:27.180 ************************************ 00:07:27.180 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:27.180 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:27.180 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:27.180 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:27.180 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:27.180 12:12:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:27.180 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:27.180 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.180 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.180 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:27.439 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:27.439 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:27.439 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c00576f8-55fd-42fd-b0c7-a92168c1dcd1 00:07:27.439 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:27.439 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c00576f8-55fd-42fd-b0c7-a92168c1dcd1 00:07:27.700 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:27.700 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:27.700 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c00576f8-55fd-42fd-b0c7-a92168c1dcd1 lvol 150 00:07:27.962 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=aef66d0d-dcfa-4971-941d-fef19e998e03 00:07:27.962 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.962 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:27.962 [2024-11-04 12:12:02.468499] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:27.962 [2024-11-04 12:12:02.468557] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:27.962 true 00:07:27.962 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
c00576f8-55fd-42fd-b0c7-a92168c1dcd1 00:07:27.962 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:28.223 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:28.223 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:28.484 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aef66d0d-dcfa-4971-941d-fef19e998e03 00:07:28.484 12:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:28.745 [2024-11-04 12:12:03.142568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.745 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:28.745 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1460951 00:07:28.745 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:28.745 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:28.745 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1460951 /var/tmp/bdevperf.sock 00:07:28.745 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1460951 ']' 00:07:28.745 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:28.745 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.745 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:28.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:28.745 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.745 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:29.005 [2024-11-04 12:12:03.368431] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:07:29.005 [2024-11-04 12:12:03.368487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1460951 ] 00:07:29.005 [2024-11-04 12:12:03.447593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.005 [2024-11-04 12:12:03.483342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.577 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.577 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:29.577 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:29.838 Nvme0n1 00:07:30.099 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:30.099 [ 00:07:30.099 { 00:07:30.099 "name": "Nvme0n1", 00:07:30.099 "aliases": [ 00:07:30.099 "aef66d0d-dcfa-4971-941d-fef19e998e03" 00:07:30.099 ], 00:07:30.099 "product_name": "NVMe disk", 00:07:30.099 "block_size": 4096, 00:07:30.099 "num_blocks": 38912, 00:07:30.099 "uuid": "aef66d0d-dcfa-4971-941d-fef19e998e03", 00:07:30.099 "numa_id": 0, 00:07:30.099 "assigned_rate_limits": { 00:07:30.099 "rw_ios_per_sec": 0, 00:07:30.099 "rw_mbytes_per_sec": 0, 00:07:30.099 "r_mbytes_per_sec": 0, 00:07:30.099 "w_mbytes_per_sec": 0 00:07:30.099 }, 00:07:30.099 "claimed": false, 00:07:30.099 "zoned": false, 00:07:30.099 "supported_io_types": { 00:07:30.099 "read": true, 00:07:30.099 "write": true, 00:07:30.099 "unmap": true, 00:07:30.099 "flush": true, 00:07:30.099 "reset": true, 00:07:30.099 "nvme_admin": true, 00:07:30.099 "nvme_io": true, 00:07:30.099 "nvme_io_md": false, 00:07:30.099 "write_zeroes": true, 00:07:30.099 "zcopy": false, 00:07:30.099 "get_zone_info": false, 00:07:30.099 "zone_management": false, 00:07:30.099 "zone_append": false, 00:07:30.099 "compare": true, 00:07:30.099 "compare_and_write": true, 00:07:30.099 "abort": true, 00:07:30.099 "seek_hole": false, 00:07:30.099 "seek_data": false, 00:07:30.099 "copy": true, 00:07:30.099 "nvme_iov_md": false 00:07:30.099 }, 00:07:30.099 "memory_domains": [ 00:07:30.099 { 00:07:30.099 "dma_device_id": "system", 00:07:30.099 "dma_device_type": 1 00:07:30.099 } 00:07:30.099 ], 00:07:30.099 "driver_specific": { 00:07:30.099 "nvme": [ 00:07:30.099 { 00:07:30.099 "trid": { 00:07:30.099 "trtype": "TCP", 00:07:30.099 "adrfam": "IPv4", 00:07:30.099 "traddr": "10.0.0.2", 00:07:30.099 "trsvcid": "4420", 00:07:30.099 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:30.099 }, 00:07:30.099 "ctrlr_data": { 00:07:30.099 "cntlid": 1, 00:07:30.099 "vendor_id": "0x8086", 00:07:30.099 "model_number": "SPDK bdev Controller", 00:07:30.099 "serial_number": "SPDK0", 00:07:30.099 "firmware_revision": "25.01", 00:07:30.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:30.099 "oacs": { 00:07:30.099 "security": 0, 00:07:30.099 "format": 0, 00:07:30.099 "firmware": 0, 00:07:30.099 "ns_manage": 0 00:07:30.099 }, 00:07:30.099 "multi_ctrlr": true, 00:07:30.099 
"ana_reporting": false 00:07:30.099 }, 00:07:30.099 "vs": { 00:07:30.100 "nvme_version": "1.3" 00:07:30.100 }, 00:07:30.100 "ns_data": { 00:07:30.100 "id": 1, 00:07:30.100 "can_share": true 00:07:30.100 } 00:07:30.100 } 00:07:30.100 ], 00:07:30.100 "mp_policy": "active_passive" 00:07:30.100 } 00:07:30.100 } 00:07:30.100 ] 00:07:30.100 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1461285 00:07:30.100 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:30.100 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:30.360 Running I/O for 10 seconds... 00:07:31.302 Latency(us) 00:07:31.302 [2024-11-04T11:12:05.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.302 Nvme0n1 : 1.00 17842.00 69.70 0.00 0.00 0.00 0.00 0.00 00:07:31.302 [2024-11-04T11:12:05.872Z] =================================================================================================================== 00:07:31.302 [2024-11-04T11:12:05.872Z] Total : 17842.00 69.70 0.00 0.00 0.00 0.00 0.00 00:07:31.302 00:07:32.245 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c00576f8-55fd-42fd-b0c7-a92168c1dcd1 00:07:32.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.245 Nvme0n1 : 2.00 17906.00 69.95 0.00 0.00 0.00 0.00 0.00 00:07:32.245 [2024-11-04T11:12:06.815Z] =================================================================================================================== 00:07:32.245 [2024-11-04T11:12:06.815Z] Total : 17906.00 69.95 0.00 0.00 0.00 0.00 0.00 00:07:32.245 00:07:32.245 true 00:07:32.245 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c00576f8-55fd-42fd-b0c7-a92168c1dcd1 00:07:32.245 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:32.505 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:32.505 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:32.505 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1461285 00:07:33.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.448 Nvme0n1 : 3.00 17968.33 70.19 0.00 0.00 0.00 0.00 0.00 00:07:33.448 [2024-11-04T11:12:08.018Z] =================================================================================================================== 00:07:33.448 [2024-11-04T11:12:08.018Z] Total : 17968.33 70.19 0.00 0.00 0.00 0.00 0.00 00:07:33.448 00:07:34.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.388 Nvme0n1 : 4.00 18013.25 70.36 0.00 0.00 0.00 0.00 0.00 00:07:34.388 [2024-11-04T11:12:08.958Z] 
===================================================================================================================
00:07:34.388 [2024-11-04T11:12:08.958Z] Total : 18013.25 70.36 0.00 0.00 0.00 0.00 0.00
00:07:34.388
00:07:35.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:35.330 Nvme0n1 : 5.00 18035.20 70.45 0.00 0.00 0.00 0.00 0.00
00:07:35.330 [2024-11-04T11:12:09.900Z] ===================================================================================================================
00:07:35.330 [2024-11-04T11:12:09.900Z] Total : 18035.20 70.45 0.00 0.00 0.00 0.00 0.00
00:07:35.330
00:07:36.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:36.319 Nvme0n1 : 6.00 18046.33 70.49 0.00 0.00 0.00 0.00 0.00
00:07:36.319 [2024-11-04T11:12:10.889Z] ===================================================================================================================
00:07:36.319 [2024-11-04T11:12:10.889Z] Total : 18046.33 70.49 0.00 0.00 0.00 0.00 0.00
00:07:36.319
00:07:37.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:37.331 Nvme0n1 : 7.00 18064.57 70.56 0.00 0.00 0.00 0.00 0.00
00:07:37.331 [2024-11-04T11:12:11.901Z] ===================================================================================================================
00:07:37.331 [2024-11-04T11:12:11.901Z] Total : 18064.57 70.56 0.00 0.00 0.00 0.00 0.00
00:07:37.331
00:07:38.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:38.274 Nvme0n1 : 8.00 18079.62 70.62 0.00 0.00 0.00 0.00 0.00
00:07:38.274 [2024-11-04T11:12:12.844Z] ===================================================================================================================
00:07:38.274 [2024-11-04T11:12:12.844Z] Total : 18079.62 70.62 0.00 0.00 0.00 0.00 0.00
00:07:38.274
00:07:39.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:39.216 Nvme0n1 : 9.00 18097.67 70.69 0.00 0.00 0.00 0.00 0.00
00:07:39.216 [2024-11-04T11:12:13.786Z] ===================================================================================================================
00:07:39.216 [2024-11-04T11:12:13.786Z] Total : 18097.67 70.69 0.00 0.00 0.00 0.00 0.00
00:07:39.216
00:07:40.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:40.158 Nvme0n1 : 10.00 18107.10 70.73 0.00 0.00 0.00 0.00 0.00
00:07:40.158 [2024-11-04T11:12:14.728Z] ===================================================================================================================
00:07:40.158 [2024-11-04T11:12:14.728Z] Total : 18107.10 70.73 0.00 0.00 0.00 0.00 0.00
00:07:40.158
00:07:40.158
00:07:40.158 Latency(us)
00:07:40.158 [2024-11-04T11:12:14.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:40.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:40.158 Nvme0n1 : 10.00 18109.10 70.74 0.00 0.00 7066.32 4423.68 12943.36
00:07:40.158 [2024-11-04T11:12:14.728Z] ===================================================================================================================
00:07:40.158 [2024-11-04T11:12:14.728Z] Total : 18109.10 70.74 0.00 0.00 7066.32 4423.68 12943.36
00:07:40.158 {
00:07:40.158 "results": [
00:07:40.158 {
00:07:40.158 "job": "Nvme0n1",
00:07:40.158 "core_mask": "0x2",
00:07:40.158 "workload": "randwrite",
00:07:40.158 "status": "finished",
00:07:40.158 "queue_depth": 128,
00:07:40.158 "io_size": 4096,
"runtime": 10.002432,
00:07:40.158 "iops": 18109.09586788493,
00:07:40.158 "mibps": 70.73865573392551,
00:07:40.158 "io_failed": 0,
00:07:40.158 "io_timeout": 0,
00:07:40.158 "avg_latency_us": 7066.3185270654485,
00:07:40.158 "min_latency_us": 4423.68,
00:07:40.158 "max_latency_us": 12943.36
00:07:40.158 }
00:07:40.158 ],
00:07:40.158 "core_count": 1
00:07:40.158 }
00:07:40.158 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1460951
00:07:40.158 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1460951 ']'
00:07:40.419 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1460951
00:07:40.419 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname
00:07:40.419 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:40.419 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1460951
00:07:40.419 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:07:40.419 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:07:40.419 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1460951'
00:07:40.419 killing process with pid 1460951
12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1460951
00:07:40.419 Received shutdown signal, test time was about 10.000000 seconds
00:07:40.419
00:07:40.419 Latency(us)
00:07:40.419 [2024-11-04T11:12:14.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:40.419 [2024-11-04T11:12:14.989Z] ===================================================================================================================
00:07:40.419 [2024-11-04T11:12:14.989Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1460951
00:07:40.419 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:40.680 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:40.941 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c00576f8-55fd-42fd-b0c7-a92168c1dcd1
00:07:40.941 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:07:40.941 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:07:40.941 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:07:40.941 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:07:41.201 [2024-11-04 12:12:15.562269] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:07:41.201 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c00576f8-55fd-42fd-b0c7-a92168c1dcd1
00:07:41.201 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0
00:07:41.201 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c00576f8-55fd-42fd-b0c7-a92168c1dcd1
00:07:41.201 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:41.201 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:41.201 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:41.201 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:41.201 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:41.201 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:41.201 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:41.201 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:07:41.201 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c00576f8-55fd-42fd-b0c7-a92168c1dcd1
00:07:41.462 request:
00:07:41.462 {
00:07:41.462 "uuid": "c00576f8-55fd-42fd-b0c7-a92168c1dcd1",
00:07:41.462 "method": "bdev_lvol_get_lvstores",
00:07:41.462 "req_id": 1
00:07:41.462 }
00:07:41.462 Got JSON-RPC error response
00:07:41.462 response:
00:07:41.462 {
00:07:41.462 "code": -19,
00:07:41.462 "message": "No such device"
00:07:41.462 }
00:07:41.462 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1
00:07:41.462 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:41.462 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:41.462 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:41.462 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:41.462 aio_bdev
00:07:41.462 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev aef66d0d-dcfa-4971-941d-fef19e998e03
00:07:41.462 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=aef66d0d-dcfa-4971-941d-fef19e998e03
00:07:41.462 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:41.462 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i
00:07:41.462 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:41.462 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:41.462 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:07:41.722 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aef66d0d-dcfa-4971-941d-fef19e998e03 -t 2000
00:07:41.722 [
00:07:41.722 {
00:07:41.722 "name": "aef66d0d-dcfa-4971-941d-fef19e998e03",
00:07:41.722 "aliases": [
00:07:41.722 "lvs/lvol"
00:07:41.722 ],
00:07:41.722 "product_name": "Logical Volume",
00:07:41.722 "block_size": 4096,
00:07:41.722 "num_blocks": 38912,
00:07:41.722 "uuid": "aef66d0d-dcfa-4971-941d-fef19e998e03",
00:07:41.722 "assigned_rate_limits": {
00:07:41.722 "rw_ios_per_sec": 0,
00:07:41.722 "rw_mbytes_per_sec": 0,
00:07:41.722 "r_mbytes_per_sec": 0,
00:07:41.722 "w_mbytes_per_sec": 0
00:07:41.722 },
00:07:41.722 "claimed": false,
00:07:41.723 "zoned": false,
00:07:41.723 "supported_io_types": {
00:07:41.723 "read": true,
00:07:41.723 "write": true,
00:07:41.723 "unmap": true,
00:07:41.723 "flush": false,
00:07:41.723 "reset": true,
00:07:41.723 "nvme_admin": false,
00:07:41.723 "nvme_io": false,
00:07:41.723 "nvme_io_md": false,
00:07:41.723 "write_zeroes": true,
00:07:41.723 "zcopy": false,
00:07:41.723 "get_zone_info": false,
00:07:41.723 "zone_management": false,
00:07:41.723 "zone_append": false,
00:07:41.723 "compare": false,
00:07:41.723 "compare_and_write": false,
00:07:41.723 "abort": false,
00:07:41.723 "seek_hole": true,
00:07:41.723 "seek_data": true,
00:07:41.723 "copy": false,
00:07:41.723 "nvme_iov_md": false
00:07:41.723 },
00:07:41.723 "driver_specific": {
00:07:41.723 "lvol": {
00:07:41.723 "lvol_store_uuid": "c00576f8-55fd-42fd-b0c7-a92168c1dcd1",
00:07:41.723 "base_bdev": "aio_bdev",
00:07:41.723 "thin_provision": false,
00:07:41.723 "num_allocated_clusters": 38,
00:07:41.723 "snapshot": false,
00:07:41.723 "clone": false,
00:07:41.723 "esnap_clone": false
00:07:41.723 }
00:07:41.723 }
00:07:41.723 }
00:07:41.723 ]
00:07:41.983 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0
00:07:41.983 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c00576f8-55fd-42fd-b0c7-a92168c1dcd1
00:07:41.983
12:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:41.983 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:41.983 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c00576f8-55fd-42fd-b0c7-a92168c1dcd1 00:07:41.983 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:42.244 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:42.244 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aef66d0d-dcfa-4971-941d-fef19e998e03 00:07:42.244 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c00576f8-55fd-42fd-b0c7-a92168c1dcd1 00:07:42.504 12:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:42.765 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.765 00:07:42.765 real 0m15.580s 00:07:42.765 user 0m15.322s 00:07:42.765 sys 0m1.291s 00:07:42.765 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.765 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:42.765 ************************************ 00:07:42.765 END TEST lvs_grow_clean 00:07:42.765 ************************************ 00:07:42.765 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:42.765 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:42.765 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.765 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.765 ************************************ 00:07:42.765 START TEST lvs_grow_dirty 00:07:42.765 ************************************ 00:07:42.765 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:42.765 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:42.765 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:42.765 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:42.765 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:42.765 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:42.765 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:42.765 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.765 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.765 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:43.025 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:43.025 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:43.286 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=79eeac01-005d-40f7-9a1a-ec246e9c562c 00:07:43.286 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79eeac01-005d-40f7-9a1a-ec246e9c562c 00:07:43.286 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:43.286 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:43.286 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:43.286 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 79eeac01-005d-40f7-9a1a-ec246e9c562c lvol 150 00:07:43.546 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2b3b0afe-5835-4352-8625-07f8160871db 00:07:43.546 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:43.546 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:43.806 [2024-11-04 12:12:18.137924] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:43.806 [2024-11-04 12:12:18.137977] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:43.806 true 00:07:43.806 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79eeac01-005d-40f7-9a1a-ec246e9c562c 00:07:43.806 12:12:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:43.806 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:43.806 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:44.066 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2b3b0afe-5835-4352-8625-07f8160871db 00:07:44.327 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:44.327 [2024-11-04 12:12:18.775889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.327 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:44.589 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1464670 00:07:44.589 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:44.589 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:44.589 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1464670 /var/tmp/bdevperf.sock 00:07:44.589 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1464670 ']' 00:07:44.589 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:44.589 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.589 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:44.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:44.589 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.589 12:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:44.589 [2024-11-04 12:12:19.019380] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
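The bdevperf run being started here can be reproduced by hand. A minimal sketch of the flow this test drives, using only the commands and flags visible in this log; SPDK_DIR is an illustrative placeholder for the checked-out SPDK tree, and the NVMe-oF TCP target from earlier in the log is assumed to be listening on 10.0.0.2:4420:

    # Launch bdevperf idle (-z): core mask 0x2, 4 KiB IOs, queue depth 128,
    # randwrite for 10 s, -S 1 for per-second status lines.
    "$SPDK_DIR"/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # Attach the target subsystem; the controller surfaces as bdev Nvme0n1.
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # Kick off the configured workload; this produces the per-second tables
    # and the JSON summary that follow in the log.
    "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
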
00:07:44.589 [2024-11-04 12:12:19.019433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464670 ]
00:07:44.589 [2024-11-04 12:12:19.093635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:44.589 [2024-11-04 12:12:19.123268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:45.531 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:45.531 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0
00:07:45.531 12:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:07:45.531 Nvme0n1
00:07:45.531 12:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:07:45.793 [
00:07:45.793 {
00:07:45.793 "name": "Nvme0n1",
00:07:45.793 "aliases": [
00:07:45.793 "2b3b0afe-5835-4352-8625-07f8160871db"
00:07:45.793 ],
00:07:45.793 "product_name": "NVMe disk",
00:07:45.793 "block_size": 4096,
00:07:45.793 "num_blocks": 38912,
00:07:45.793 "uuid": "2b3b0afe-5835-4352-8625-07f8160871db",
00:07:45.793 "numa_id": 0,
00:07:45.793 "assigned_rate_limits": {
00:07:45.793 "rw_ios_per_sec": 0,
00:07:45.793 "rw_mbytes_per_sec": 0,
00:07:45.793 "r_mbytes_per_sec": 0,
00:07:45.793 "w_mbytes_per_sec": 0
00:07:45.793 },
00:07:45.793 "claimed": false,
00:07:45.793 "zoned": false,
00:07:45.793 "supported_io_types": {
00:07:45.793 "read": true,
00:07:45.793 "write": true,
00:07:45.793 "unmap": true,
00:07:45.793 "flush": true,
00:07:45.793 "reset": true,
00:07:45.793 "nvme_admin": true,
00:07:45.793 "nvme_io": true,
00:07:45.793 "nvme_io_md": false,
00:07:45.793 "write_zeroes": true,
00:07:45.793 "zcopy": false,
00:07:45.793 "get_zone_info": false,
00:07:45.793 "zone_management": false,
00:07:45.793 "zone_append": false,
00:07:45.793 "compare": true,
00:07:45.793 "compare_and_write": true,
00:07:45.793 "abort": true,
00:07:45.793 "seek_hole": false,
00:07:45.793 "seek_data": false,
00:07:45.793 "copy": true,
00:07:45.793 "nvme_iov_md": false
00:07:45.793 },
00:07:45.793 "memory_domains": [
00:07:45.793 {
00:07:45.793 "dma_device_id": "system",
00:07:45.793 "dma_device_type": 1
00:07:45.793 }
00:07:45.793 ],
00:07:45.793 "driver_specific": {
00:07:45.793 "nvme": [
00:07:45.793 {
00:07:45.793 "trid": {
00:07:45.793 "trtype": "TCP",
00:07:45.793 "adrfam": "IPv4",
00:07:45.793 "traddr": "10.0.0.2",
00:07:45.793 "trsvcid": "4420",
00:07:45.793 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:07:45.793 },
00:07:45.793 "ctrlr_data": {
00:07:45.793 "cntlid": 1,
00:07:45.793 "vendor_id": "0x8086",
00:07:45.793 "model_number": "SPDK bdev Controller",
00:07:45.793 "serial_number": "SPDK0",
00:07:45.793 "firmware_revision": "25.01",
00:07:45.793 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:45.793 "oacs": {
00:07:45.793 "security": 0,
00:07:45.793 "format": 0,
00:07:45.793 "firmware": 0,
00:07:45.793 "ns_manage": 0
00:07:45.793 },
00:07:45.793 "multi_ctrlr": true,
00:07:45.793 "ana_reporting": false
00:07:45.793 },
00:07:45.793 "vs": {
00:07:45.793 "nvme_version": "1.3"
00:07:45.793 },
00:07:45.793 "ns_data": {
00:07:45.793 "id": 1,
00:07:45.793 "can_share": true
00:07:45.793 }
00:07:45.793 }
00:07:45.793 ],
00:07:45.793 "mp_policy": "active_passive"
00:07:45.793 }
00:07:45.793 }
00:07:45.793 ]
00:07:45.793 12:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1464851
00:07:45.793 12:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:07:45.793 12:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:07:45.793 Running I/O for 10 seconds...
00:07:46.736 Latency(us)
00:07:46.736 [2024-11-04T11:12:21.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:46.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:46.736 Nvme0n1 : 1.00 17881.00 69.85 0.00 0.00 0.00 0.00 0.00
00:07:46.736 [2024-11-04T11:12:21.306Z] ===================================================================================================================
00:07:46.736 [2024-11-04T11:12:21.306Z] Total : 17881.00 69.85 0.00 0.00 0.00 0.00 0.00
00:07:46.736
00:07:47.678 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 79eeac01-005d-40f7-9a1a-ec246e9c562c
00:07:47.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:47.940 Nvme0n1 : 2.00 17950.00 70.12 0.00 0.00 0.00 0.00 0.00
00:07:47.940 [2024-11-04T11:12:22.510Z] ===================================================================================================================
00:07:47.940 [2024-11-04T11:12:22.510Z] Total : 17950.00 70.12 0.00 0.00 0.00 0.00 0.00
00:07:47.940
00:07:47.940 true
00:07:47.940 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79eeac01-005d-40f7-9a1a-ec246e9c562c
00:07:47.940 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:07:48.201 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:07:48.201 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:07:48.201 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1464851
00:07:48.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:48.772 Nvme0n1 : 3.00 17967.33 70.18 0.00 0.00 0.00 0.00 0.00
00:07:48.772 [2024-11-04T11:12:23.342Z] ===================================================================================================================
00:07:48.772 [2024-11-04T11:12:23.342Z] Total : 17967.33 70.18 0.00 0.00 0.00 0.00 0.00
00:07:48.772
00:07:50.160 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:50.160 Nvme0n1 : 4.00 18007.75 70.34 0.00 0.00 0.00 0.00 0.00
00:07:50.160 [2024-11-04T11:12:24.730Z] ===================================================================================================================
00:07:50.160 [2024-11-04T11:12:24.730Z] Total : 18007.75 70.34 0.00 0.00 0.00 0.00 0.00
00:07:50.160
00:07:51.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:51.103 Nvme0n1 : 5.00 18035.80 70.45 0.00 0.00 0.00 0.00 0.00
00:07:51.103 [2024-11-04T11:12:25.673Z] ===================================================================================================================
00:07:51.103 [2024-11-04T11:12:25.673Z] Total : 18035.80 70.45 0.00 0.00 0.00 0.00 0.00
00:07:51.103
00:07:52.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:52.046 Nvme0n1 : 6.00 18052.33 70.52 0.00 0.00 0.00 0.00 0.00
00:07:52.046 [2024-11-04T11:12:26.616Z] ===================================================================================================================
00:07:52.046 [2024-11-04T11:12:26.616Z] Total : 18052.33 70.52 0.00 0.00 0.00 0.00 0.00
00:07:52.046
00:07:52.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:52.988 Nvme0n1 : 7.00 18062.29 70.56 0.00 0.00 0.00 0.00 0.00
00:07:52.988 [2024-11-04T11:12:27.558Z] ===================================================================================================================
00:07:52.988 [2024-11-04T11:12:27.558Z] Total : 18062.29 70.56 0.00 0.00 0.00 0.00 0.00
00:07:52.988
00:07:53.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:53.931 Nvme0n1 : 8.00 18079.75 70.62 0.00 0.00 0.00 0.00 0.00
00:07:53.931 [2024-11-04T11:12:28.501Z] ===================================================================================================================
00:07:53.931 [2024-11-04T11:12:28.501Z] Total : 18079.75 70.62 0.00 0.00 0.00 0.00 0.00
00:07:53.931
00:07:54.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:54.875 Nvme0n1 : 9.00 18087.00 70.65 0.00 0.00 0.00 0.00 0.00
00:07:54.875 [2024-11-04T11:12:29.445Z] ===================================================================================================================
00:07:54.875 [2024-11-04T11:12:29.445Z] Total : 18087.00 70.65 0.00 0.00 0.00 0.00 0.00
00:07:54.875
00:07:55.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:55.857 Nvme0n1 : 10.00 18084.60 70.64 0.00 0.00 0.00 0.00 0.00
00:07:55.857 [2024-11-04T11:12:30.427Z] ===================================================================================================================
00:07:55.857 [2024-11-04T11:12:30.427Z] Total : 18084.60 70.64 0.00 0.00 0.00 0.00 0.00
00:07:55.857
00:07:55.857
00:07:55.857 Latency(us)
00:07:55.857 [2024-11-04T11:12:30.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:55.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:55.857 Nvme0n1 : 10.00 18093.36 70.68 0.00 0.00 7071.82 1733.97 12724.91
00:07:55.857 [2024-11-04T11:12:30.427Z] ===================================================================================================================
00:07:55.857 [2024-11-04T11:12:30.427Z] Total : 18093.36 70.68 0.00 0.00 7071.82 1733.97 12724.91
00:07:55.857 {
00:07:55.857 "results": [
00:07:55.857 {
00:07:55.857 "job": "Nvme0n1",
00:07:55.857 "core_mask": "0x2",
00:07:55.857 "workload": "randwrite",
00:07:55.857 "status": "finished",
00:07:55.857 "queue_depth": 128,
00:07:55.857 "io_size": 4096,
"runtime": 10.002234,
00:07:55.857 "iops": 18093.357943835348,
00:07:55.857 "mibps": 70.67717946810683,
00:07:55.857 "io_failed": 0,
00:07:55.857 "io_timeout": 0,
00:07:55.857 "avg_latency_us": 7071.815856274014,
00:07:55.857 "min_latency_us": 1733.9733333333334,
00:07:55.857 "max_latency_us": 12724.906666666666
00:07:55.857 }
00:07:55.857 ],
00:07:55.857 "core_count": 1
00:07:55.857 }
00:07:55.857 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1464670
00:07:55.857 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1464670 ']'
00:07:55.857 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1464670
00:07:55.857 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname
00:07:55.857 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:55.857 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1464670
00:07:55.857 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:07:55.857 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:07:55.857 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1464670'
00:07:55.857 killing process with pid 1464670
00:07:55.857 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1464670
00:07:55.857 Received shutdown signal, test time was about 10.000000 seconds
00:07:55.857
00:07:55.857 Latency(us)
00:07:55.857 [2024-11-04T11:12:30.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:55.857 [2024-11-04T11:12:30.427Z] ===================================================================================================================
00:07:55.857 [2024-11-04T11:12:30.427Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1464670
00:07:56.119 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:56.119 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:56.381 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79eeac01-005d-40f7-9a1a-ec246e9c562c
00:07:56.381 12:12:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:07:56.642 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:07:56.642 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:07:56.642 12:12:31
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1460371 00:07:56.642 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1460371 00:07:56.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1460371 Killed "${NVMF_APP[@]}" "$@" 00:07:56.642 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:56.642 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:56.642 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:56.642 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:56.642 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:56.642 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1467154 00:07:56.642 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1467154 00:07:56.642 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:56.642 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1467154 ']' 00:07:56.642 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.642 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.642 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.642 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.642 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:56.642 [2024-11-04 12:12:31.134427] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:07:56.642 [2024-11-04 12:12:31.134485] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.642 [2024-11-04 12:12:31.202354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.903 [2024-11-04 12:12:31.239156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.903 [2024-11-04 12:12:31.239192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.903 [2024-11-04 12:12:31.239200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.903 [2024-11-04 12:12:31.239207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:56.903 [2024-11-04 12:12:31.239213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:56.903 [2024-11-04 12:12:31.239829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:56.903 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:56.903 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0
00:07:56.904 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:07:56.904 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:56.904 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:56.904 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:56.904 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:57.165 [2024-11-04 12:12:31.524734] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore
00:07:57.165 [2024-11-04 12:12:31.524830] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:07:57.165 [2024-11-04 12:12:31.524861] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:07:57.165 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:07:57.165 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2b3b0afe-5835-4352-8625-07f8160871db
00:07:57.165 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=2b3b0afe-5835-4352-8625-07f8160871db
00:07:57.165 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:57.165 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i
00:07:57.165 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:57.165 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:57.165 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:07:57.426 12:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2b3b0afe-5835-4352-8625-07f8160871db -t 2000
00:07:57.426 [
00:07:57.426 {
00:07:57.426 "name": "2b3b0afe-5835-4352-8625-07f8160871db",
00:07:57.426 "aliases": [
00:07:57.426 "lvs/lvol"
00:07:57.426 ],
00:07:57.426 "product_name": "Logical Volume",
00:07:57.426 "block_size": 4096,
00:07:57.426 "num_blocks": 38912,
00:07:57.426 "uuid": "2b3b0afe-5835-4352-8625-07f8160871db",
00:07:57.426 "assigned_rate_limits": {
00:07:57.426 "rw_ios_per_sec": 0,
00:07:57.426 "rw_mbytes_per_sec": 0,
00:07:57.426 "r_mbytes_per_sec": 0,
00:07:57.426 "w_mbytes_per_sec": 0
00:07:57.426 },
00:07:57.426 "claimed": false,
00:07:57.426 "zoned": false,
00:07:57.426 "supported_io_types": {
00:07:57.426 "read": true,
00:07:57.426 "write": true,
00:07:57.426 "unmap": true,
00:07:57.426 "flush": false,
00:07:57.426 "reset": true,
00:07:57.426 "nvme_admin": false,
00:07:57.426 "nvme_io": false,
00:07:57.426 "nvme_io_md": false,
00:07:57.426 "write_zeroes": true,
00:07:57.426 "zcopy": false,
00:07:57.426 "get_zone_info": false,
00:07:57.426 "zone_management": false,
00:07:57.426 "zone_append": false,
00:07:57.426 "compare": false,
00:07:57.426 "compare_and_write": false,
00:07:57.426 "abort": false,
00:07:57.426 "seek_hole": true,
00:07:57.426 "seek_data": true,
00:07:57.426 "copy": false,
00:07:57.426 "nvme_iov_md": false
00:07:57.426 },
00:07:57.426 "driver_specific": {
00:07:57.426 "lvol": {
00:07:57.426 "lvol_store_uuid": "79eeac01-005d-40f7-9a1a-ec246e9c562c",
00:07:57.426 "base_bdev": "aio_bdev",
00:07:57.426 "thin_provision": false,
00:07:57.426 "num_allocated_clusters": 38,
00:07:57.426 "snapshot": false,
00:07:57.426 "clone": false,
00:07:57.426 "esnap_clone": false
00:07:57.426 }
00:07:57.426 }
00:07:57.426 }
00:07:57.426 ]
00:07:57.688 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0
00:07:57.688 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79eeac01-005d-40f7-9a1a-ec246e9c562c
00:07:57.688 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:07:57.688 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # free_clusters=61
00:07:57.688 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79eeac01-005d-40f7-9a1a-ec246e9c562c
00:07:57.688 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:07:57.949 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:07:57.949 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:07:57.949 [2024-11-04 12:12:32.417100] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:07:57.949 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79eeac01-005d-40f7-9a1a-ec246e9c562c
00:07:57.949 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0
00:07:57.949 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79eeac01-005d-40f7-9a1a-ec246e9c562c
00:07:57.949 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:57.949 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:57.949 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:57.949 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:57.949 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:57.949 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:57.949 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:57.949 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:07:57.949 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79eeac01-005d-40f7-9a1a-ec246e9c562c
00:07:58.210 request:
00:07:58.210 {
00:07:58.210 "uuid": "79eeac01-005d-40f7-9a1a-ec246e9c562c",
00:07:58.210 "method": "bdev_lvol_get_lvstores",
00:07:58.210 "req_id": 1
00:07:58.210 }
00:07:58.210 Got JSON-RPC error response
00:07:58.210 response:
00:07:58.210 {
00:07:58.210 "code": -19,
00:07:58.210 "message": "No such device"
00:07:58.210 }
00:07:58.210 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1
00:07:58.210 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:58.210 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:58.210 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:58.210 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:58.471 aio_bdev
00:07:58.471 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2b3b0afe-5835-4352-8625-07f8160871db
00:07:58.471 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=2b3b0afe-5835-4352-8625-07f8160871db
00:07:58.471 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:58.471 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i
00:07:58.471 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:58.471 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:58.471 12:12:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:07:58.471 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2b3b0afe-5835-4352-8625-07f8160871db -t 2000
00:07:58.731 [
00:07:58.731 {
00:07:58.731 "name": "2b3b0afe-5835-4352-8625-07f8160871db",
00:07:58.731 "aliases": [
00:07:58.731 "lvs/lvol"
00:07:58.731 ],
00:07:58.731 "product_name": "Logical Volume",
00:07:58.731 "block_size": 4096,
00:07:58.731 "num_blocks": 38912,
00:07:58.731 "uuid": "2b3b0afe-5835-4352-8625-07f8160871db",
00:07:58.731 "assigned_rate_limits": {
00:07:58.731 "rw_ios_per_sec": 0,
00:07:58.731 "rw_mbytes_per_sec": 0,
00:07:58.731 "r_mbytes_per_sec": 0,
00:07:58.731 "w_mbytes_per_sec": 0
00:07:58.731 },
00:07:58.731 "claimed": false,
00:07:58.731 "zoned": false,
00:07:58.731 "supported_io_types": {
00:07:58.731 "read": true,
00:07:58.731 "write": true,
00:07:58.731 "unmap": true,
00:07:58.731 "flush": false,
00:07:58.731 "reset": true,
00:07:58.731 "nvme_admin": false,
00:07:58.731 "nvme_io": false,
00:07:58.731 "nvme_io_md": false,
00:07:58.731 "write_zeroes": true,
00:07:58.731 "zcopy": false,
00:07:58.731 "get_zone_info": false,
00:07:58.731 "zone_management": false,
00:07:58.731 "zone_append": false,
00:07:58.731 "compare": false,
00:07:58.731 "compare_and_write": false,
00:07:58.731 "abort": false,
00:07:58.731 "seek_hole": true,
00:07:58.731 "seek_data": true,
00:07:58.731 "copy": false,
00:07:58.731 "nvme_iov_md": false
00:07:58.731 },
00:07:58.731 "driver_specific": {
00:07:58.731 "lvol": {
00:07:58.731 "lvol_store_uuid": "79eeac01-005d-40f7-9a1a-ec246e9c562c",
00:07:58.731 "base_bdev": "aio_bdev",
00:07:58.731 "thin_provision": false,
00:07:58.731 "num_allocated_clusters": 38,
00:07:58.731 "snapshot": false,
00:07:58.731 "clone": false,
00:07:58.731 "esnap_clone": false
00:07:58.731 }
00:07:58.731 }
00:07:58.731 }
00:07:58.731 ]
00:07:58.992 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0
00:07:58.992 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79eeac01-005d-40f7-9a1a-ec246e9c562c
00:07:58.992 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:07:58.992 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:07:58.992 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 79eeac01-005d-40f7-9a1a-ec246e9c562c
00:07:58.992 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:07:58.992 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:07:58.992 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2b3b0afe-5835-4352-8625-07f8160871db
00:07:59.252 12:12:33
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 79eeac01-005d-40f7-9a1a-ec246e9c562c 00:07:59.512 12:12:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:59.512 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:59.774 00:07:59.774 real 0m16.835s 00:07:59.774 user 0m45.240s 00:07:59.774 sys 0m2.777s 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:59.774 ************************************ 00:07:59.774 END TEST lvs_grow_dirty 00:07:59.774 ************************************ 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:59.774 nvmf_trace.0 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.774 rmmod nvme_tcp 00:07:59.774 rmmod nvme_fabrics 00:07:59.774 rmmod nvme_keyring 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:59.774 
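The dirty-grow pass that just finished boils down to a short RPC sequence: double the AIO backing file, rescan it, grow the lvstore, and confirm the cluster counts. A minimal sketch using only commands that appear in this log; SPDK_DIR and LVS_UUID are illustrative placeholders (the lvstore UUID is the one printed above):

    AIO_FILE="$SPDK_DIR"/test/nvmf/target/aio_bdev    # backing file, originally: truncate -s 200M
    truncate -s 400M "$AIO_FILE"                      # grow the file, as the test does
    "$SPDK_DIR"/scripts/rpc.py bdev_aio_rescan aio_bdev
    "$SPDK_DIR"/scripts/rpc.py bdev_lvol_grow_lvstore -u "$LVS_UUID"
    # total_data_clusters should now read 99 (up from 49), with free_clusters at 61:
    "$SPDK_DIR"/scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters'
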
12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1467154 ']' 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1467154 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1467154 ']' 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1467154 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1467154 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1467154' 00:07:59.774 killing process with pid 1467154 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1467154 00:07:59.774 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1467154 00:08:00.036 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:00.036 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:00.036 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:00.036 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:00.036 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:08:00.036 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:00.036 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:08:00.036 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:00.036 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:00.036 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.036 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.036 12:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:02.583 00:08:02.583 real 0m43.706s 00:08:02.583 user 1m6.383s 00:08:02.583 sys 0m10.110s 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:02.583 ************************************ 00:08:02.583 END TEST nvmf_lvs_grow 00:08:02.583 ************************************ 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:02.583 ************************************ 00:08:02.583 START TEST nvmf_bdev_io_wait 00:08:02.583 ************************************ 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:02.583 * Looking for test storage... 00:08:02.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:02.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.583 --rc genhtml_branch_coverage=1 00:08:02.583 --rc genhtml_function_coverage=1 00:08:02.583 --rc genhtml_legend=1 00:08:02.583 --rc geninfo_all_blocks=1 00:08:02.583 --rc geninfo_unexecuted_blocks=1 00:08:02.583 00:08:02.583 ' 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:02.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.583 --rc genhtml_branch_coverage=1 00:08:02.583 --rc genhtml_function_coverage=1 00:08:02.583 --rc genhtml_legend=1 00:08:02.583 --rc geninfo_all_blocks=1 00:08:02.583 --rc geninfo_unexecuted_blocks=1 00:08:02.583 00:08:02.583 ' 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:02.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.583 --rc genhtml_branch_coverage=1 00:08:02.583 --rc genhtml_function_coverage=1 00:08:02.583 --rc genhtml_legend=1 00:08:02.583 --rc geninfo_all_blocks=1 00:08:02.583 --rc geninfo_unexecuted_blocks=1 00:08:02.583 00:08:02.583 ' 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:02.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.583 --rc genhtml_branch_coverage=1 00:08:02.583 --rc genhtml_function_coverage=1 00:08:02.583 --rc genhtml_legend=1 00:08:02.583 --rc geninfo_all_blocks=1 00:08:02.583 --rc geninfo_unexecuted_blocks=1 00:08:02.583 00:08:02.583 ' 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.583 12:12:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.583 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:02.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:02.584 12:12:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:09.169 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:09.170 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:09.170 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.170 12:12:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:09.170 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:09.170 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:09.170 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:09.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:08:09.431 00:08:09.431 --- 10.0.0.2 ping statistics --- 00:08:09.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.431 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:09.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:08:09.431 00:08:09.431 --- 10.0.0.1 ping statistics --- 00:08:09.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.431 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1471946 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1471946 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:09.431 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1471946 ']' 00:08:09.432 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.432 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.432 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.432 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.432 12:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:09.432 [2024-11-04 12:12:43.900031] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:08:09.432 [2024-11-04 12:12:43.900095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.432 [2024-11-04 12:12:43.971617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.692 [2024-11-04 12:12:44.015895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.692 [2024-11-04 12:12:44.015931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.692 [2024-11-04 12:12:44.015939] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.692 [2024-11-04 12:12:44.015945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.692 [2024-11-04 12:12:44.015951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.692 [2024-11-04 12:12:44.017544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.692 [2024-11-04 12:12:44.017653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.692 [2024-11-04 12:12:44.017810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.692 [2024-11-04 12:12:44.017811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:10.263 [2024-11-04 12:12:44.809913] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.263 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.524 Malloc0 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.524 [2024-11-04 12:12:44.869131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1472297 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1472299 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:10.524 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:10.524 { 00:08:10.525 "params": { 
00:08:10.525 "name": "Nvme$subsystem", 00:08:10.525 "trtype": "$TEST_TRANSPORT", 00:08:10.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:10.525 "adrfam": "ipv4", 00:08:10.525 "trsvcid": "$NVMF_PORT", 00:08:10.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.525 "hdgst": ${hdgst:-false}, 00:08:10.525 "ddgst": ${ddgst:-false} 00:08:10.525 }, 00:08:10.525 "method": "bdev_nvme_attach_controller" 00:08:10.525 } 00:08:10.525 EOF 00:08:10.525 )") 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1472301 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1472304 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:10.525 { 00:08:10.525 "params": { 00:08:10.525 "name": "Nvme$subsystem", 00:08:10.525 "trtype": "$TEST_TRANSPORT", 00:08:10.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:10.525 "adrfam": "ipv4", 00:08:10.525 "trsvcid": "$NVMF_PORT", 00:08:10.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.525 "hdgst": ${hdgst:-false}, 00:08:10.525 "ddgst": ${ddgst:-false} 00:08:10.525 }, 00:08:10.525 "method": "bdev_nvme_attach_controller" 00:08:10.525 } 00:08:10.525 EOF 00:08:10.525 )") 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:10.525 { 00:08:10.525 "params": { 00:08:10.525 "name": "Nvme$subsystem", 00:08:10.525 "trtype": "$TEST_TRANSPORT", 00:08:10.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:10.525 "adrfam": "ipv4", 00:08:10.525 "trsvcid": "$NVMF_PORT", 00:08:10.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.525 "hdgst": ${hdgst:-false}, 
00:08:10.525 "ddgst": ${ddgst:-false} 00:08:10.525 }, 00:08:10.525 "method": "bdev_nvme_attach_controller" 00:08:10.525 } 00:08:10.525 EOF 00:08:10.525 )") 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:10.525 { 00:08:10.525 "params": { 00:08:10.525 "name": "Nvme$subsystem", 00:08:10.525 "trtype": "$TEST_TRANSPORT", 00:08:10.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:10.525 "adrfam": "ipv4", 00:08:10.525 "trsvcid": "$NVMF_PORT", 00:08:10.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.525 "hdgst": ${hdgst:-false}, 00:08:10.525 "ddgst": ${ddgst:-false} 00:08:10.525 }, 00:08:10.525 "method": "bdev_nvme_attach_controller" 00:08:10.525 } 00:08:10.525 EOF 00:08:10.525 )") 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1472297 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:10.525 "params": { 00:08:10.525 "name": "Nvme1", 00:08:10.525 "trtype": "tcp", 00:08:10.525 "traddr": "10.0.0.2", 00:08:10.525 "adrfam": "ipv4", 00:08:10.525 "trsvcid": "4420", 00:08:10.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:10.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:10.525 "hdgst": false, 00:08:10.525 "ddgst": false 00:08:10.525 }, 00:08:10.525 "method": "bdev_nvme_attach_controller" 00:08:10.525 }' 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:10.525 "params": { 00:08:10.525 "name": "Nvme1", 00:08:10.525 "trtype": "tcp", 00:08:10.525 "traddr": "10.0.0.2", 00:08:10.525 "adrfam": "ipv4", 00:08:10.525 "trsvcid": "4420", 00:08:10.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:10.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:10.525 "hdgst": false, 00:08:10.525 "ddgst": false 00:08:10.525 }, 00:08:10.525 "method": "bdev_nvme_attach_controller" 00:08:10.525 }' 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:10.525 "params": { 00:08:10.525 "name": "Nvme1", 00:08:10.525 "trtype": "tcp", 00:08:10.525 "traddr": "10.0.0.2", 00:08:10.525 "adrfam": "ipv4", 00:08:10.525 "trsvcid": "4420", 00:08:10.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:10.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:10.525 "hdgst": false, 00:08:10.525 "ddgst": false 00:08:10.525 }, 00:08:10.525 "method": "bdev_nvme_attach_controller" 00:08:10.525 }' 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:10.525 12:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:10.525 "params": { 00:08:10.525 "name": "Nvme1", 00:08:10.525 "trtype": "tcp", 00:08:10.525 "traddr": "10.0.0.2", 00:08:10.525 "adrfam": "ipv4", 00:08:10.525 "trsvcid": "4420", 00:08:10.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:10.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:10.525 "hdgst": false, 00:08:10.525 "ddgst": false 00:08:10.525 }, 00:08:10.525 "method": "bdev_nvme_attach_controller" 00:08:10.525 }' [2024-11-04 12:12:44.925898] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:08:10.525 [2024-11-04 12:12:44.925953] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:10.525 [2024-11-04 12:12:44.926833] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:08:10.525 [2024-11-04 12:12:44.926867] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:08:10.525 [2024-11-04 12:12:44.926881] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:10.525 [2024-11-04 12:12:44.926913] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:10.525 [2024-11-04 12:12:44.927305] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:08:10.526 [2024-11-04 12:12:44.927348] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:10.526 [2024-11-04 12:12:45.073641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.786 [2024-11-04 12:12:45.102995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:10.786 [2024-11-04 12:12:45.118604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.786 [2024-11-04 12:12:45.147281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:10.786 [2024-11-04 12:12:45.174020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.786 [2024-11-04 12:12:45.202484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:10.786 [2024-11-04 12:12:45.203109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.786 [2024-11-04 12:12:45.231416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:10.786 Running I/O for 1 seconds... 00:08:10.786 Running I/O for 1 seconds... 00:08:10.786 Running I/O for 1 seconds... 00:08:11.046 Running I/O for 1 seconds... 00:08:11.989 11168.00 IOPS, 43.62 MiB/s 00:08:11.989 Latency(us) 00:08:11.989 [2024-11-04T11:12:46.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.989 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:11.989 Nvme1n1 : 1.01 11187.41 43.70 0.00 0.00 11378.71 5106.35 17913.17 00:08:11.989 [2024-11-04T11:12:46.559Z] =================================================================================================================== 00:08:11.989 [2024-11-04T11:12:46.559Z] Total : 11187.41 43.70 0.00 0.00 11378.71 5106.35 17913.17 00:08:11.989 14347.00 IOPS, 56.04 MiB/s 00:08:11.989 Latency(us) 00:08:11.989 [2024-11-04T11:12:46.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.989 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:11.989 Nvme1n1 : 1.01 14407.11 56.28 0.00 0.00 8857.33 4560.21 18022.40 00:08:11.989 [2024-11-04T11:12:46.559Z] =================================================================================================================== 00:08:11.989 [2024-11-04T11:12:46.559Z] Total : 14407.11 56.28 0.00 0.00 8857.33 4560.21 18022.40 00:08:11.989 10556.00 IOPS, 41.23 MiB/s 00:08:11.989 Latency(us) 00:08:11.989 [2024-11-04T11:12:46.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.989 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:11.989 Nvme1n1 : 1.00 10640.41 41.56 0.00 0.00 12009.02 2512.21 29491.20 00:08:11.989 [2024-11-04T11:12:46.559Z] =================================================================================================================== 00:08:11.989 [2024-11-04T11:12:46.559Z] Total : 10640.41 41.56 0.00 0.00 12009.02 2512.21 29491.20 00:08:11.989 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1472299 00:08:11.989 188152.00 IOPS, 734.97 MiB/s 00:08:11.989 Latency(us) 00:08:11.989 [2024-11-04T11:12:46.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.989 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:11.989 Nvme1n1 : 1.00 187778.62 733.51 0.00 0.00 677.53 302.08 1966.08 00:08:11.989 
[2024-11-04T11:12:46.559Z] =================================================================================================================== 00:08:11.989 [2024-11-04T11:12:46.559Z] Total : 187778.62 733.51 0.00 0.00 677.53 302.08 1966.08 00:08:11.989 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1472301 00:08:11.989 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1472304 00:08:11.989 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:11.989 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.989 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.989 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.989 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:11.989 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:11.989 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:11.989 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:11.989 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:11.989 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:11.989 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:11.989 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:11.989 rmmod nvme_tcp 00:08:12.251 rmmod nvme_fabrics 00:08:12.251 rmmod nvme_keyring 00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1471946 ']' 00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1471946 00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1471946 ']' 00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1471946 00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1471946 00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1471946' 00:08:12.251 killing process with pid 1471946 
00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1471946
00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1471946
00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save
00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore
00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:12.251 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:12.252 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:12.252 12:12:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:14.800 12:12:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:14.800
00:08:14.800 real 0m12.255s
00:08:14.800 user 0m18.029s
00:08:14.800 sys 0m6.706s
00:08:14.800 12:12:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:14.800 12:12:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:14.800 ************************************
00:08:14.800 END TEST nvmf_bdev_io_wait
00:08:14.800 ************************************
00:08:14.800 12:12:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:08:14.800 12:12:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:14.800 12:12:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:14.800 12:12:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:14.800 ************************************
00:08:14.800 START TEST nvmf_queue_depth
00:08:14.800 ************************************
00:08:14.800 12:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:08:14.800 * Looking for test storage...
00:08:14.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-:
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-:
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<'
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:08:14.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:14.800 --rc genhtml_branch_coverage=1
00:08:14.800 --rc genhtml_function_coverage=1
00:08:14.800 --rc genhtml_legend=1
00:08:14.800 --rc geninfo_all_blocks=1
00:08:14.800 --rc geninfo_unexecuted_blocks=1
00:08:14.800
00:08:14.800 '
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:08:14.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:14.800 --rc genhtml_branch_coverage=1
00:08:14.800 --rc genhtml_function_coverage=1
00:08:14.800 --rc genhtml_legend=1
00:08:14.800 --rc geninfo_all_blocks=1
00:08:14.800 --rc geninfo_unexecuted_blocks=1
00:08:14.800
00:08:14.800 '
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:08:14.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:14.800 --rc genhtml_branch_coverage=1
00:08:14.800 --rc genhtml_function_coverage=1
00:08:14.800 --rc genhtml_legend=1
00:08:14.800 --rc geninfo_all_blocks=1
00:08:14.800 --rc geninfo_unexecuted_blocks=1
00:08:14.800
00:08:14.800 '
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:08:14.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:14.800 --rc genhtml_branch_coverage=1
00:08:14.800 --rc genhtml_function_coverage=1
00:08:14.800 --rc genhtml_legend=1
00:08:14.800 --rc geninfo_all_blocks=1
00:08:14.800 --rc geninfo_unexecuted_blocks=1
00:08:14.800
00:08:14.800 '
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:14.800 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:14.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512
00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit
00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs
00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no
00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns
00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable
00:08:14.801 12:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=()
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=()
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=()
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=()
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=()
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:08:22.943 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:08:22.943 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:08:22.943 Found net devices under 0000:4b:00.0: cvl_0_0
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:08:22.943 Found net devices under 0000:4b:00.1: cvl_0_1
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:22.943 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:22.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:22.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms
00:08:22.944
00:08:22.944 --- 10.0.0.2 ping statistics ---
00:08:22.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:22.944 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:22.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:22.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms
00:08:22.944
00:08:22.944 --- 10.0.0.1 ping statistics ---
00:08:22.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:22.944 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1476845
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1476845
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1476845 ']'
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:22.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:22.944 12:12:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:22.944 [2024-11-04 12:12:56.615876] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
00:08:22.944 [2024-11-04 12:12:56.615948] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:22.944 [2024-11-04 12:12:56.708428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:22.944 [2024-11-04 12:12:56.758873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:22.944 [2024-11-04 12:12:56.758927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:22.944 [2024-11-04 12:12:56.758936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:22.944 [2024-11-04 12:12:56.758943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:22.944 [2024-11-04 12:12:56.758955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:22.944 [2024-11-04 12:12:56.759760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:22.944 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:22.944 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0
00:08:22.944 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:08:22.944 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable
00:08:22.944 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:22.944 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:22.944 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:22.944 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:22.944 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:22.944 [2024-11-04 12:12:57.469340] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:22.944 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:22.944 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:08:22.944 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:22.944 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:22.944 Malloc0
00:08:22.944 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:22.944 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:08:22.944 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:22.944 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:23.204 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.204 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:08:23.205 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.205 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:23.205 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.205 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:23.205 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.205 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:23.205 [2024-11-04 12:12:57.530598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:23.205 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.205 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1477031
00:08:23.205 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:08:23.205 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:08:23.205 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1477031 /var/tmp/bdevperf.sock
00:08:23.205 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1477031 ']'
00:08:23.205 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:08:23.205 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:23.205 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:08:23.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:08:23.205 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:23.205 12:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:23.205 [2024-11-04 12:12:57.596890] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
00:08:23.205 [2024-11-04 12:12:57.596952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477031 ] 00:08:23.205 [2024-11-04 12:12:57.661384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.205 [2024-11-04 12:12:57.704644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.144 12:12:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.144 12:12:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:24.144 12:12:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:24.144 12:12:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.144 12:12:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.144 NVMe0n1 00:08:24.144 12:12:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.144 12:12:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:24.404 Running I/O for 10 seconds... 00:08:26.332 8358.00 IOPS, 32.65 MiB/s [2024-11-04T11:13:01.878Z] 9423.50 IOPS, 36.81 MiB/s [2024-11-04T11:13:02.906Z] 10204.33 IOPS, 39.86 MiB/s [2024-11-04T11:13:03.846Z] 10535.50 IOPS, 41.15 MiB/s [2024-11-04T11:13:04.790Z] 10852.40 IOPS, 42.39 MiB/s [2024-11-04T11:13:06.175Z] 10940.33 IOPS, 42.74 MiB/s [2024-11-04T11:13:07.115Z] 11096.14 IOPS, 43.34 MiB/s [2024-11-04T11:13:08.056Z] 11138.12 IOPS, 43.51 MiB/s [2024-11-04T11:13:08.997Z] 11223.00 IOPS, 43.84 MiB/s [2024-11-04T11:13:08.997Z] 11264.40 IOPS, 44.00 MiB/s 00:08:34.427 Latency(us) 00:08:34.427 [2024-11-04T11:13:08.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.427 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:34.427 Verification LBA range: start 0x0 length 0x4000 00:08:34.427 NVMe0n1 : 10.05 11314.26 44.20 0.00 0.00 90225.00 13434.88 80390.83 00:08:34.427 [2024-11-04T11:13:08.997Z] =================================================================================================================== 00:08:34.427 [2024-11-04T11:13:08.997Z] Total : 11314.26 44.20 0.00 0.00 90225.00 13434.88 80390.83 00:08:34.427 { 00:08:34.427 "results": [ 00:08:34.427 { 00:08:34.427 "job": "NVMe0n1", 00:08:34.427 "core_mask": "0x1", 00:08:34.427 "workload": "verify", 00:08:34.427 "status": "finished", 00:08:34.427 "verify_range": { 00:08:34.427 "start": 0, 00:08:34.427 "length": 16384 00:08:34.427 }, 00:08:34.427 "queue_depth": 1024, 00:08:34.427 "io_size": 4096, 00:08:34.427 "runtime": 10.046439, 00:08:34.427 "iops": 11314.257718580684, 00:08:34.427 "mibps": 44.1963192132058, 00:08:34.427 "io_failed": 0, 00:08:34.427 "io_timeout": 0, 00:08:34.427 "avg_latency_us": 90224.99902640439, 00:08:34.427 "min_latency_us": 13434.88, 00:08:34.427 "max_latency_us": 80390.82666666666 00:08:34.427 } 00:08:34.427 ], 00:08:34.427 "core_count": 1 00:08:34.427 } 00:08:34.427 12:13:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1477031 00:08:34.427 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1477031 ']' 00:08:34.427 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1477031 00:08:34.427 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:34.427 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.427 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1477031 00:08:34.427 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:34.427 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:34.427 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1477031' 00:08:34.427 killing process with pid 1477031 00:08:34.427 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1477031 00:08:34.427 Received shutdown signal, test time was about 10.000000 seconds 00:08:34.427 00:08:34.427 Latency(us) 00:08:34.427 [2024-11-04T11:13:08.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.427 [2024-11-04T11:13:08.997Z] =================================================================================================================== 00:08:34.427 [2024-11-04T11:13:08.997Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:34.427 12:13:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1477031 00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:34.688 rmmod nvme_tcp 00:08:34.688 rmmod nvme_fabrics 00:08:34.688 rmmod nvme_keyring 00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1476845 ']' 00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1476845 00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1476845 ']' 00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1476845
00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname
00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1476845
00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1476845'
00:08:34.688 killing process with pid 1476845
00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1476845
00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1476845
00:08:34.688 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:08:34.689 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:08:34.689 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:08:34.689 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr
00:08:34.689 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save
00:08:34.689 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:08:34.689 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore
00:08:34.689 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:34.689 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:34.689 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:34.689 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:34.689 12:13:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:37.233
00:08:37.233 real 0m22.380s
00:08:37.233 user 0m25.871s
00:08:37.233 sys 0m6.878s
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:37.233 ************************************
00:08:37.233 END TEST nvmf_queue_depth
00:08:37.233 ************************************
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:37.233 ************************************
00:08:37.233 START TEST nvmf_target_multipath
00:08:37.233 ************************************
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:08:37.233 * Looking for test storage...
00:08:37.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-:
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-:
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<'
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:37.233 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:08:37.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:37.234 --rc genhtml_branch_coverage=1
00:08:37.234 --rc genhtml_function_coverage=1
00:08:37.234 --rc genhtml_legend=1
00:08:37.234 --rc geninfo_all_blocks=1
00:08:37.234 --rc geninfo_unexecuted_blocks=1
00:08:37.234
00:08:37.234 '
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:08:37.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:37.234 --rc genhtml_branch_coverage=1
00:08:37.234 --rc genhtml_function_coverage=1
00:08:37.234 --rc genhtml_legend=1
00:08:37.234 --rc geninfo_all_blocks=1
00:08:37.234 --rc geninfo_unexecuted_blocks=1
00:08:37.234
00:08:37.234 '
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:08:37.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:37.234 --rc genhtml_branch_coverage=1
00:08:37.234 --rc genhtml_function_coverage=1
00:08:37.234 --rc genhtml_legend=1
00:08:37.234 --rc geninfo_all_blocks=1
00:08:37.234 --rc geninfo_unexecuted_blocks=1
00:08:37.234
00:08:37.234 '
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:08:37.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:37.234 --rc genhtml_branch_coverage=1
00:08:37.234 --rc genhtml_function_coverage=1
00:08:37.234 --rc genhtml_legend=1
00:08:37.234 --rc geninfo_all_blocks=1
00:08:37.234 --rc geninfo_unexecuted_blocks=1
00:08:37.234
00:08:37.234 '
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.234 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:37.235 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:37.235 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:37.235 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:37.235 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.235 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:37.235 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:37.235 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:37.235 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.235 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.235 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.235 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:37.235 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:37.235 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.235 12:13:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
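The "[: : integer expression expected" failure recorded above comes from nvmf/common.sh line 33 running '[' '' -eq 1 ']': an unset or empty variable expands to the empty string, and test's -eq operator requires an integer on both sides. A minimal sketch of the failure and the usual guard; FLAG is a placeholder, since the trace does not show which variable line 33 actually tests:

FLAG=""
[ "$FLAG" -eq 1 ] && echo enabled      # fails: "[: : integer expression expected"

# Guard: default the expansion to 0 so the comparison is always numeric.
[ "${FLAG:-0}" -eq 1 ] && echo enabled
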
net_devs=() 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:45.374 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:45.374 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:45.374 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.374 12:13:18 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:45.374 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.374 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.375 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.375 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.375 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:45.375 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.375 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.375 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.375 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:45.375 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:45.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:08:45.375 00:08:45.375 --- 10.0.0.2 ping statistics --- 00:08:45.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.375 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:08:45.375 12:13:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:45.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:08:45.375 00:08:45.375 --- 10.0.0.1 ping statistics --- 00:08:45.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.375 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:45.375 only one NIC for nvmf test 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
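Condensing the nvmf_tcp_init sequence traced above: one e810 port (cvl_0_0) is moved into a private network namespace to serve as the target at 10.0.0.2, the other (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and a ping in each direction confirms the link before the test proceeds. The same steps as a standalone sketch (interface names and addresses taken from the trace; requires root):

ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
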
00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:45.375 rmmod nvme_tcp 00:08:45.375 rmmod nvme_fabrics 00:08:45.375 rmmod nvme_keyring 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.375 12:13:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:46.759 00:08:46.759 real 0m9.851s 00:08:46.759 user 0m2.104s 00:08:46.759 sys 0m5.671s 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:46.759 ************************************ 00:08:46.759 END TEST nvmf_target_multipath 00:08:46.759 ************************************ 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.759 12:13:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.021 ************************************ 00:08:47.021 START TEST nvmf_zcopy 00:08:47.021 ************************************ 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:47.021 * Looking for test storage... 
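The iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline traced in both teardowns above pairs with the rule installed during setup: every rule the harness adds carries an 'SPDK_NVMF:' comment, so cleanup can strip them all in one save/filter/restore pass without tracking rule numbers. The two halves side by side, with the commands as they appear in the trace:

# Setup: open the NVMe/TCP port, tagging the rule for later removal.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Teardown: drop every tagged rule in one pass, leaving all other rules intact.
iptables-save | grep -v SPDK_NVMF | iptables-restore
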
00:08:47.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:47.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.021 --rc genhtml_branch_coverage=1 00:08:47.021 --rc genhtml_function_coverage=1 00:08:47.021 --rc genhtml_legend=1 00:08:47.021 --rc geninfo_all_blocks=1 00:08:47.021 --rc geninfo_unexecuted_blocks=1 00:08:47.021 00:08:47.021 ' 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:47.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.021 --rc genhtml_branch_coverage=1 00:08:47.021 --rc genhtml_function_coverage=1 00:08:47.021 --rc genhtml_legend=1 00:08:47.021 --rc geninfo_all_blocks=1 00:08:47.021 --rc geninfo_unexecuted_blocks=1 00:08:47.021 00:08:47.021 ' 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:47.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.021 --rc genhtml_branch_coverage=1 00:08:47.021 --rc genhtml_function_coverage=1 00:08:47.021 --rc genhtml_legend=1 00:08:47.021 --rc geninfo_all_blocks=1 00:08:47.021 --rc geninfo_unexecuted_blocks=1 00:08:47.021 00:08:47.021 ' 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:47.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.021 --rc genhtml_branch_coverage=1 00:08:47.021 --rc genhtml_function_coverage=1 00:08:47.021 --rc genhtml_legend=1 00:08:47.021 --rc geninfo_all_blocks=1 00:08:47.021 --rc geninfo_unexecuted_blocks=1 00:08:47.021 00:08:47.021 ' 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.021 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:47.022 12:13:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.160 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:55.160 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:55.160 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:55.160 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:55.160 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:55.160 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:55.160 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:55.160 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:55.160 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:55.160 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:55.160 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:55.160 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:55.161 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:55.161 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:55.161 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:55.161 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:55.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:55.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:08:55.161 00:08:55.161 --- 10.0.0.2 ping statistics --- 00:08:55.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.161 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:55.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:55.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:08:55.161 00:08:55.161 --- 10.0.0.1 ping statistics --- 00:08:55.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.161 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1487837 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1487837 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1487837 ']' 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.161 12:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.161 [2024-11-04 12:13:29.012321] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
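nvmfappstart, traced above, launches nvmf_tgt inside the target namespace, records its pid in nvmfpid, and blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A simplified stand-in for that launch-and-wait pattern (the real waitforlisten in autotest_common.sh also checks that the pid stays alive and enforces a timeout; rpc_get_methods is a standard scripts/rpc.py call):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start the target in the namespace; -m 0x2 pins it to core 1.
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Poll the RPC socket until the app is ready to accept commands.
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.1
done
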
00:08:55.161 [2024-11-04 12:13:29.012386] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.161 [2024-11-04 12:13:29.104617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.161 [2024-11-04 12:13:29.155356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.161 [2024-11-04 12:13:29.155416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.161 [2024-11-04 12:13:29.155425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.161 [2024-11-04 12:13:29.155432] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.161 [2024-11-04 12:13:29.155439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.161 [2024-11-04 12:13:29.156248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.426 [2024-11-04 12:13:29.873370] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.426 [2024-11-04 12:13:29.889650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.426 malloc0 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:55.426 { 00:08:55.426 "params": { 00:08:55.426 "name": "Nvme$subsystem", 00:08:55.426 "trtype": "$TEST_TRANSPORT", 00:08:55.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.426 "adrfam": "ipv4", 00:08:55.426 "trsvcid": "$NVMF_PORT", 00:08:55.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.426 "hdgst": ${hdgst:-false}, 00:08:55.426 "ddgst": ${ddgst:-false} 00:08:55.426 }, 00:08:55.426 "method": "bdev_nvme_attach_controller" 00:08:55.426 } 00:08:55.426 EOF 00:08:55.426 )") 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
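gen_nvmf_target_json, expanded above, builds one bdev_nvme_attach_controller entry per subsystem from a heredoc, pretty-prints the result with jq, and hands it to bdevperf through process substitution, which is where the /dev/fd/62 in the command line comes from. A compact equivalent for the single-controller case traced here; the outer "subsystems" wrapper is inferred from SPDK's JSON config layout, since the trace shows only the inner entry:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

gen_json() {    # single-target stand-in for gen_nvmf_target_json
    jq -n '{subsystems: [{subsystem: "bdev", config: [{
        method: "bdev_nvme_attach_controller",
        params: {name: "Nvme1", trtype: "tcp", traddr: "10.0.0.2",
                 adrfam: "ipv4", trsvcid: "4420",
                 subnqn: "nqn.2016-06.io.spdk:cnode1",
                 hostnqn: "nqn.2016-06.io.spdk:host1",
                 hdgst: false, ddgst: false}}]}]}'
}

# <(...) surfaces the JSON as /dev/fd/NN, so no temp file is needed.
"$spdk/build/examples/bdevperf" --json <(gen_json) -t 10 -q 128 -w verify -o 8192
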
00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=,
00:08:55.426 12:13:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:08:55.426 "params": {
00:08:55.426 "name": "Nvme1",
00:08:55.426 "trtype": "tcp",
00:08:55.426 "traddr": "10.0.0.2",
00:08:55.426 "adrfam": "ipv4",
00:08:55.426 "trsvcid": "4420",
00:08:55.426 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:55.426 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:55.426 "hdgst": false,
00:08:55.426 "ddgst": false
00:08:55.426 },
00:08:55.426 "method": "bdev_nvme_attach_controller"
00:08:55.426 }'
00:08:55.426 [2024-11-04 12:13:29.980093] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
00:08:55.427 [2024-11-04 12:13:29.980160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488077 ]
00:08:55.687 [2024-11-04 12:13:30.047152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:55.687 [2024-11-04 12:13:30.093542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:55.947 Running I/O for 10 seconds...
00:08:58.269 6641.00 IOPS, 51.88 MiB/s [2024-11-04T11:13:33.777Z] 7849.00 IOPS, 61.32 MiB/s [2024-11-04T11:13:34.716Z] 8465.67 IOPS, 66.14 MiB/s [2024-11-04T11:13:35.657Z] 8782.00 IOPS, 68.61 MiB/s [2024-11-04T11:13:36.598Z] 8977.60 IOPS, 70.14 MiB/s [2024-11-04T11:13:37.539Z] 9104.17 IOPS, 71.13 MiB/s [2024-11-04T11:13:38.479Z] 9193.71 IOPS, 71.83 MiB/s [2024-11-04T11:13:39.859Z] 9262.88 IOPS, 72.37 MiB/s [2024-11-04T11:13:40.799Z] 9314.33 IOPS, 72.77 MiB/s [2024-11-04T11:13:40.799Z] 9345.80 IOPS, 73.01 MiB/s
00:09:06.229 Latency(us)
00:09:06.229 [2024-11-04T11:13:40.799Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:06.229 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:06.229 Verification LBA range: start 0x0 length 0x1000
00:09:06.229 Nvme1n1                     :      10.01    9346.74      73.02       0.00       0.00   13643.13    1870.51   28398.93
00:09:06.229 [2024-11-04T11:13:40.799Z] ===================================================================================================================
00:09:06.229 [2024-11-04T11:13:40.799Z] Total                       :              9346.74      73.02       0.00       0.00   13643.13    1870.51   28398.93
00:09:06.229 12:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1490108
00:09:06.229 12:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:06.229 12:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:06.229 12:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:06.229 12:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:06.229 12:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=()
00:09:06.229 12:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config
00:09:06.229 12:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:09:06.229 12:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:09:06.230 {
00:09:06.230 "params": {
00:09:06.230 "name": "Nvme$subsystem",
00:09:06.230 "trtype": "$TEST_TRANSPORT",
00:09:06.230 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:06.230 "adrfam": "ipv4",
00:09:06.230 "trsvcid": "$NVMF_PORT",
00:09:06.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:06.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:06.230 "hdgst": ${hdgst:-false},
00:09:06.230 "ddgst": ${ddgst:-false}
00:09:06.230 },
00:09:06.230 "method": "bdev_nvme_attach_controller"
00:09:06.230 }
00:09:06.230 EOF
00:09:06.230 )")
00:09:06.230 [2024-11-04 12:13:40.551880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:06.230 [2024-11-04 12:13:40.551912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:06.230 12:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat
00:09:06.230 12:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq .
00:09:06.230 [2024-11-04 12:13:40.559861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:06.230 [2024-11-04 12:13:40.559870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:06.230 12:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=,
00:09:06.230 12:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:09:06.230 "params": {
00:09:06.230 "name": "Nvme1",
00:09:06.230 "trtype": "tcp",
00:09:06.230 "traddr": "10.0.0.2",
00:09:06.230 "adrfam": "ipv4",
00:09:06.230 "trsvcid": "4420",
00:09:06.230 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:06.230 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:06.230 "hdgst": false,
00:09:06.230 "ddgst": false
00:09:06.230 },
00:09:06.230 "method": "bdev_nvme_attach_controller"
00:09:06.230 }'
00:09:06.230 [2024-11-04 12:13:40.567879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:06.230 [2024-11-04 12:13:40.567887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:06.230 [2024-11-04 12:13:40.575899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:06.230 [2024-11-04 12:13:40.575906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:06.230 [2024-11-04 12:13:40.583921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:06.230 [2024-11-04 12:13:40.583928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:06.230 [2024-11-04 12:13:40.595950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:06.230 [2024-11-04 12:13:40.595958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:06.230 [2024-11-04 12:13:40.603971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:06.230 [2024-11-04 12:13:40.603978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:06.230 [2024-11-04 12:13:40.608679] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
00:09:06.230 [2024-11-04 12:13:40.608741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490108 ]
00:09:06.230 [2024-11-04 12:13:40.611990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:06.230 [2024-11-04 12:13:40.611998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line ERROR pair repeats every ~8-12 ms, 12:13:40.620011 through 12:13:40.668139; only distinct records are kept below ...]
00:09:06.230 [2024-11-04 12:13:40.668792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[... ERROR pair repeats, 12:13:40.676154 through 12:13:40.700222 ...]
00:09:06.230 [2024-11-04 12:13:40.704541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[... ERROR pair repeats, 12:13:40.708234 through 12:13:40.888739 ...]
00:09:06.492 Running I/O for 5 seconds...
[... ERROR pair repeats, 12:13:40.896754 through 12:13:41.898776 ...]
00:09:07.537 19017.00 IOPS, 148.57 MiB/s [2024-11-04T11:13:42.107Z]
[... ERROR pair repeats, 12:13:41.908287 through 12:13:42.768669; the run continues past this excerpt ...]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.321 [2024-11-04 12:13:42.777666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.321 [2024-11-04 12:13:42.777681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.321 [2024-11-04 12:13:42.786255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.321 [2024-11-04 12:13:42.786270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.321 [2024-11-04 12:13:42.794958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.321 [2024-11-04 12:13:42.794973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.321 [2024-11-04 12:13:42.803607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.321 [2024-11-04 12:13:42.803625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.321 [2024-11-04 12:13:42.811528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.321 [2024-11-04 12:13:42.811543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.321 [2024-11-04 12:13:42.820826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.321 [2024-11-04 12:13:42.820841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.321 [2024-11-04 12:13:42.828873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.321 [2024-11-04 12:13:42.828889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.321 [2024-11-04 12:13:42.837757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.321 [2024-11-04 12:13:42.837772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.321 [2024-11-04 12:13:42.846806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.321 [2024-11-04 12:13:42.846822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.321 [2024-11-04 12:13:42.855466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.321 [2024-11-04 12:13:42.855481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.321 [2024-11-04 12:13:42.864221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.321 [2024-11-04 12:13:42.864237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.321 [2024-11-04 12:13:42.872971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.321 [2024-11-04 12:13:42.872985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.321 [2024-11-04 12:13:42.881981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.321 [2024-11-04 12:13:42.881996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.582 [2024-11-04 12:13:42.890641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.582 [2024-11-04 12:13:42.890656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.582 [2024-11-04 12:13:42.900003] 
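The two messages in that pair come from the SPDK target itself: spdk_nvmf_subsystem_add_ns_ext() in lib/nvmf/subsystem.c rejects an add-namespace request whose NSID is already attached to the subsystem, and the nvmf_subsystem_add_ns RPC handler in lib/nvmf/nvmf_rpc.c then fails the call with "Unable to add namespace". A minimal sketch of a sequence that produces exactly this pair; the NQN and bdev names are illustrative placeholders, not taken from this run, and the flag spellings follow scripts/rpc.py in the SPDK tree:

  # The first call claims NSID 1; the second asks for the same NSID and is
  # rejected by spdk_nvmf_subsystem_add_ns_ext() with "already in use".
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc1

One failed pair every ~9 ms is consistent with a tight loop reissuing the RPC while the namespace stays attached; the target keeps serving I/O throughout, as the interim IOPS samples below show.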
00:09:08.582 19101.00 IOPS, 149.23 MiB/s [2024-11-04T11:13:43.152Z]
00:09:08.582 [2024-11-04 12:13:42.908182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.582 [2024-11-04 12:13:42.908197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same failure pair repeats at the same cadence from [2024-11-04 12:13:42.916979] through [2024-11-04 12:13:43.901563] ...]
00:09:09.365 19123.00 IOPS, 149.40 MiB/s [2024-11-04T11:13:43.935Z]
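The bracketed ISO stamps on the interim lines ([2024-11-04T11:13:43.152Z], [2024-11-04T11:13:43.935Z]) are UTC, one hour behind the target's local-time error stamps ([2024-11-04 12:13:...]). The samples themselves, 19101.00 and 19123.00 IOPS so far and 19142.00 IOPS just below, show the I/O workload holding steady through the error storm. The MiB/s figures correspond to an 8 KiB I/O size, an inference from the numbers rather than anything stated in the log, and can be checked directly:

  # MiB/s = IOPS * 8 KiB / 1024; the 8 KiB block size is inferred, not logged
  echo "19101 19123 19142" | awk '{ for (i = 1; i <= NF; i++) printf "%d IOPS -> %.2f MiB/s\n", $i, $i * 8 / 1024 }'
  # 19101 IOPS -> 149.23 MiB/s
  # 19123 IOPS -> 149.40 MiB/s
  # 19142 IOPS -> 149.55 MiB/s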
00:09:09.365 [2024-11-04 12:13:43.909652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.365 [2024-11-04 12:13:43.909667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same failure pair repeats at the same cadence from [2024-11-04 12:13:43.918144] through [2024-11-04 12:13:44.907050] ...]
00:09:10.410 19142.00 IOPS, 149.55 MiB/s [2024-11-04T11:13:44.980Z]
[... two further attempts at [2024-11-04 12:13:44.915049] and [2024-11-04 12:13:44.923615] fail the same way ...]
00:09:10.410 [2024-11-04
12:13:44.931970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.410 [2024-11-04 12:13:44.931985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.410 [2024-11-04 12:13:44.941202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.410 [2024-11-04 12:13:44.941217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.410 [2024-11-04 12:13:44.950062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.410 [2024-11-04 12:13:44.950080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.410 [2024-11-04 12:13:44.958812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.410 [2024-11-04 12:13:44.958826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.410 [2024-11-04 12:13:44.967062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.410 [2024-11-04 12:13:44.967077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.410 [2024-11-04 12:13:44.975993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.410 [2024-11-04 12:13:44.976008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:44.984791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:44.984806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:44.993972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:44.993986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.002881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.002896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.011204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.011219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.020297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.020311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.029409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.029424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.038250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.038265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.047159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.047173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.056012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.056027] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.064861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.064875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.073614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.073629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.082412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.082427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.091239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.091254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.099694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.099709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.108388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.108404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.117160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.117174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.126532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.126547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.135777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.135792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.144130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.144145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.152413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.152428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.161014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.161029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.169497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.169512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.178419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.178434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.186853] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.186868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.195335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.195350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.204282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.204296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.212918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.212933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.221575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.221590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.230019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.230034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.672 [2024-11-04 12:13:45.238834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.672 [2024-11-04 12:13:45.238848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.933 [2024-11-04 12:13:45.247918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.933 [2024-11-04 12:13:45.247934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.933 [2024-11-04 12:13:45.256392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.933 [2024-11-04 12:13:45.256407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.933 [2024-11-04 12:13:45.265375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.933 [2024-11-04 12:13:45.265390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.933 [2024-11-04 12:13:45.273940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.273956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.282352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.282367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.291489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.291504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.300265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.300280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.308567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.308582] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.317204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.317219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.325942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.325957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.333955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.333970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.343085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.343100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.351362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.351378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.360132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.360147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.368505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.368520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.377369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.377384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.386309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.386325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.395484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.395499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.403337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.403352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.411845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.411859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.420668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.420683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.429195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.429210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.438343] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.438358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.447187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.447202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.456098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.456113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.464877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.464891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.473418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.473433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.482055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.482070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.490393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.490407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.934 [2024-11-04 12:13:45.499344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.934 [2024-11-04 12:13:45.499359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.195 [2024-11-04 12:13:45.508128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.195 [2024-11-04 12:13:45.508143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.195 [2024-11-04 12:13:45.516851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.195 [2024-11-04 12:13:45.516866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.195 [2024-11-04 12:13:45.525267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.195 [2024-11-04 12:13:45.525282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.195 [2024-11-04 12:13:45.534194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.195 [2024-11-04 12:13:45.534209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.195 [2024-11-04 12:13:45.543407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.195 [2024-11-04 12:13:45.543423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.195 [2024-11-04 12:13:45.552069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.195 [2024-11-04 12:13:45.552084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.195 [2024-11-04 12:13:45.560660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.195 [2024-11-04 12:13:45.560675] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.195 [2024-11-04 12:13:45.569510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.195 [2024-11-04 12:13:45.569525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.195 [2024-11-04 12:13:45.578276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.195 [2024-11-04 12:13:45.578292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.195 [2024-11-04 12:13:45.587053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.195 [2024-11-04 12:13:45.587068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.195 [2024-11-04 12:13:45.595211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.195 [2024-11-04 12:13:45.595226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.195 [2024-11-04 12:13:45.603963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.195 [2024-11-04 12:13:45.603978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.195 [2024-11-04 12:13:45.612509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.195 [2024-11-04 12:13:45.612524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.195 [2024-11-04 12:13:45.621481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.195 [2024-11-04 12:13:45.621496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-11-04 12:13:45.630224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-11-04 12:13:45.630239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-11-04 12:13:45.638841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-11-04 12:13:45.638856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-11-04 12:13:45.647540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-11-04 12:13:45.647555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-11-04 12:13:45.656549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-11-04 12:13:45.656564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-11-04 12:13:45.665174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-11-04 12:13:45.665190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-11-04 12:13:45.674475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-11-04 12:13:45.674491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-11-04 12:13:45.682959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-11-04 12:13:45.682974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-11-04 12:13:45.692020] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-11-04 12:13:45.692036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-11-04 12:13:45.701142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-11-04 12:13:45.701157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-11-04 12:13:45.710031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-11-04 12:13:45.710046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-11-04 12:13:45.723571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-11-04 12:13:45.723588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-11-04 12:13:45.731991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-11-04 12:13:45.732006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-11-04 12:13:45.740702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-11-04 12:13:45.740718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-11-04 12:13:45.749363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-11-04 12:13:45.749378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.196 [2024-11-04 12:13:45.758094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.196 [2024-11-04 12:13:45.758110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.456 [2024-11-04 12:13:45.766831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.456 [2024-11-04 12:13:45.766847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.456 [2024-11-04 12:13:45.775327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.456 [2024-11-04 12:13:45.775346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.456 [2024-11-04 12:13:45.784361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.456 [2024-11-04 12:13:45.784377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.456 [2024-11-04 12:13:45.793019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.456 [2024-11-04 12:13:45.793034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.456 [2024-11-04 12:13:45.801489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.456 [2024-11-04 12:13:45.801505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.456 [2024-11-04 12:13:45.810392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.456 [2024-11-04 12:13:45.810408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.456 [2024-11-04 12:13:45.819081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.456 [2024-11-04 12:13:45.819096] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.456 [2024-11-04 12:13:45.828090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.828105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.837293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.837309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.846445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.846460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.854728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.854744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.863945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.863960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.872341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.872357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.881392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.881407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.890571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.890586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.899998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.900013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.909237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.909252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 19154.40 IOPS, 149.64 MiB/s [2024-11-04T11:13:46.027Z] [2024-11-04 12:13:45.914797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.914812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 00:09:11.457 Latency(us) 00:09:11.457 [2024-11-04T11:13:46.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.457 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:11.457 Nvme1n1 : 5.01 19157.04 149.66 0.00 0.00 6675.72 2703.36 16820.91 00:09:11.457 [2024-11-04T11:13:46.027Z] =================================================================================================================== 00:09:11.457 [2024-11-04T11:13:46.027Z] Total : 19157.04 149.66 0.00 0.00 6675.72 2703.36 16820.91 00:09:11.457 [2024-11-04 12:13:45.922820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 
12:13:45.922832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.930838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.930849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.938862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.938873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.946882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.946893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.954903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.954913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.962920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.962929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.970938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.970946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.978958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.978967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.986979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.986987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:45.994998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:45.995005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:46.003019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:46.003028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:46.011039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:46.011049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.457 [2024-11-04 12:13:46.019060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.457 [2024-11-04 12:13:46.019069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1490108) - No such process 00:09:11.717 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1490108 00:09:11.717 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.717 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.718 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- common/autotest_common.sh@10 -- # set +x 00:09:11.718 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.718 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:11.718 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.718 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.718 delay0 00:09:11.718 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.718 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:11.718 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.718 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.718 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.718 12:13:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:11.718 [2024-11-04 12:13:46.152915] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:19.852 Initializing NVMe Controllers 00:09:19.852 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:19.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:19.852 Initialization complete. Launching workers. 
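For anyone replaying this step by hand, the commands traced above reduce to the following minimal sketch. It assumes an SPDK tree checked out and a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420; rpc_cmd in the harness is a thin wrapper around scripts/rpc.py.

# wrap malloc0 in a delay bdev that adds ~1 s of latency in each direction
# (-r/-t are average/p99 read latency in microseconds, -w/-n the same for writes)
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# expose the slow bdev as namespace 1 of the subsystem
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# drive queued random read/write I/O at it for 5 s while issuing abort commands
# against requests still in flight (queue depth 64, 50% reads)
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The 1 s delay is what makes the abort test meaningful: requests linger long enough on the delay bdev that the abort commands can actually catch them, which is reflected in the "abort submitted" counters that follow.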
00:09:19.852 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 229, failed: 30809 00:09:19.852 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 30907, failed to submit 131 00:09:19.852 success 30835, unsuccessful 72, failed 0 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.852 rmmod nvme_tcp 00:09:19.852 rmmod nvme_fabrics 00:09:19.852 rmmod nvme_keyring 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1487837 ']' 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1487837 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1487837 ']' 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1487837 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1487837 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1487837' 00:09:19.852 killing process with pid 1487837 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1487837 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1487837 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:19.852 12:13:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.852 12:13:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:21.235 00:09:21.235 real 0m34.191s 00:09:21.235 user 0m45.828s 00:09:21.235 sys 0m11.512s 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:21.235 ************************************ 00:09:21.235 END TEST nvmf_zcopy 00:09:21.235 ************************************ 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.235 ************************************ 00:09:21.235 START TEST nvmf_nmic 00:09:21.235 ************************************ 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:21.235 * Looking for test storage... 
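The real/user/sys block and the START/END banners around each test come from the harness's run_test wrapper; a simplified sketch of the pattern follows (the wrapper in autotest_common.sh also records timing data and propagates failures, which is omitted here).

# print banners around a named test and time its execution
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"        # e.g. nmic.sh --transport=tcp; emits the real/user/sys block
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}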
00:09:21.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:21.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.235 --rc genhtml_branch_coverage=1 00:09:21.235 --rc genhtml_function_coverage=1 00:09:21.235 --rc genhtml_legend=1 00:09:21.235 --rc geninfo_all_blocks=1 00:09:21.235 --rc geninfo_unexecuted_blocks=1 00:09:21.235 00:09:21.235 ' 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:21.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.235 --rc genhtml_branch_coverage=1 00:09:21.235 --rc genhtml_function_coverage=1 00:09:21.235 --rc genhtml_legend=1 00:09:21.235 --rc geninfo_all_blocks=1 00:09:21.235 --rc geninfo_unexecuted_blocks=1 00:09:21.235 00:09:21.235 ' 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:21.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.235 --rc genhtml_branch_coverage=1 00:09:21.235 --rc genhtml_function_coverage=1 00:09:21.235 --rc genhtml_legend=1 00:09:21.235 --rc geninfo_all_blocks=1 00:09:21.235 --rc geninfo_unexecuted_blocks=1 00:09:21.235 00:09:21.235 ' 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:21.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.235 --rc genhtml_branch_coverage=1 00:09:21.235 --rc genhtml_function_coverage=1 00:09:21.235 --rc genhtml_legend=1 00:09:21.235 --rc geninfo_all_blocks=1 00:09:21.235 --rc geninfo_unexecuted_blocks=1 00:09:21.235 00:09:21.235 ' 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
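The long lt/cmp_versions trace above, which selected the lcov 1.x LCOV_OPTS just exported, is a field-wise numeric version compare; a condensed sketch of the idea (not the harness's exact implementation) looks like this.

# split two dotted versions on '.', '-' and ':' and compare numerically,
# field by field; succeeds when $1 is strictly older than $2
version_lt() {
    local IFS='.-:' i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}

# the check traced above: 1.15 predates 2.x, so the old --rc options apply
version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov"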
00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.235 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.496 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:21.497 
12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:21.497 12:13:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:29.634 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.634 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:29.635 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:29.635 12:14:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:29.635 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:29.635 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.635 12:14:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:29.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:09:29.635 00:09:29.635 --- 10.0.0.2 ping statistics --- 00:09:29.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.635 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:29.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:09:29.635 00:09:29.635 --- 10.0.0.1 ping statistics --- 00:09:29.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.635 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1497100 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1497100 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1497100 ']' 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.635 12:14:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.635 [2024-11-04 12:14:03.247852] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
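[annotation] The nvmf_tcp_init trace above reduces to a short, repeatable bring-up: the first E810 port (cvl_0_0) is moved into a private namespace as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator side, one iptables rule opens port 4420, and two pings validate the path before the target is launched. A condensed sketch of the same commands, taken from the trace:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator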
00:09:29.635 [2024-11-04 12:14:03.247907] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.635 [2024-11-04 12:14:03.319374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.635 [2024-11-04 12:14:03.360785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.635 [2024-11-04 12:14:03.360824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.635 [2024-11-04 12:14:03.360833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.635 [2024-11-04 12:14:03.360840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.635 [2024-11-04 12:14:03.360846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.635 [2024-11-04 12:14:03.362407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.635 [2024-11-04 12:14:03.362524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.635 [2024-11-04 12:14:03.362682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.635 [2024-11-04 12:14:03.362682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.635 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.635 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:29.635 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:29.635 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:29.635 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.635 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.635 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:29.635 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.636 [2024-11-04 12:14:04.096915] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.636 Malloc0 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.636 [2024-11-04 12:14:04.165946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:29.636 test case1: single bdev can't be used in multiple subsystems 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.636 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.636 [2024-11-04 12:14:04.201873] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:29.636 [2024-11-04 12:14:04.201893] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:29.636 [2024-11-04 12:14:04.201901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.897 request: 00:09:29.897 { 00:09:29.897 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:29.897 "namespace": { 00:09:29.897 "bdev_name": "Malloc0", 00:09:29.897 "no_auto_visible": false 
00:09:29.897 }, 00:09:29.897 "method": "nvmf_subsystem_add_ns", 00:09:29.897 "req_id": 1 00:09:29.897 } 00:09:29.897 Got JSON-RPC error response 00:09:29.897 response: 00:09:29.897 { 00:09:29.897 "code": -32602, 00:09:29.897 "message": "Invalid parameters" 00:09:29.897 } 00:09:29.897 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:29.897 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:29.897 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:29.897 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:29.897 Adding namespace failed - expected result. 00:09:29.897 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:29.897 test case2: host connect to nvmf target in multiple paths 00:09:29.897 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:29.897 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.897 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.897 [2024-11-04 12:14:04.214009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:29.897 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.897 12:14:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:31.382 12:14:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:32.763 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:32.763 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:32.763 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:32.763 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:32.763 12:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:34.676 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:34.676 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:34.676 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:34.676 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:34.676 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:34.676 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:34.676 12:14:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
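[annotation] The nmic test body above is a fixed RPC sequence followed by two kernel-initiator connects. A condensed sketch, assuming rpc_cmd resolves to scripts/rpc.py against the target's default /var/tmp/spdk.sock (target/fio.sh further below sets rpc_py to exactly that script):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # case 1: Malloc0 is already claimed (exclusive_write) by cnode1, so adding it
  # to a second subsystem must fail; the JSON-RPC error above is code -32602.
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 && exit 1   # expected to fail
  # case 2: the same host reaches cnode1 over two listeners (ports 4420 and 4421).
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421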
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:34.963 [global] 00:09:34.963 thread=1 00:09:34.963 invalidate=1 00:09:34.963 rw=write 00:09:34.963 time_based=1 00:09:34.963 runtime=1 00:09:34.963 ioengine=libaio 00:09:34.963 direct=1 00:09:34.963 bs=4096 00:09:34.963 iodepth=1 00:09:34.963 norandommap=0 00:09:34.963 numjobs=1 00:09:34.963 00:09:34.963 verify_dump=1 00:09:34.963 verify_backlog=512 00:09:34.963 verify_state_save=0 00:09:34.963 do_verify=1 00:09:34.963 verify=crc32c-intel 00:09:34.963 [job0] 00:09:34.963 filename=/dev/nvme0n1 00:09:34.963 Could not set queue depth (nvme0n1) 00:09:35.232 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.232 fio-3.35 00:09:35.232 Starting 1 thread 00:09:36.172 00:09:36.172 job0: (groupid=0, jobs=1): err= 0: pid=1498405: Mon Nov 4 12:14:10 2024 00:09:36.172 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:36.172 slat (nsec): min=25105, max=59184, avg=26493.49, stdev=3725.05 00:09:36.172 clat (usec): min=793, max=1837, avg=1009.50, stdev=70.40 00:09:36.172 lat (usec): min=819, max=1863, avg=1036.00, stdev=70.19 00:09:36.172 clat percentiles (usec): 00:09:36.172 | 1.00th=[ 840], 5.00th=[ 889], 10.00th=[ 930], 20.00th=[ 971], 00:09:36.173 | 30.00th=[ 988], 40.00th=[ 1004], 50.00th=[ 1012], 60.00th=[ 1029], 00:09:36.173 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1090], 00:09:36.173 | 99.00th=[ 1139], 99.50th=[ 1205], 99.90th=[ 1844], 99.95th=[ 1844], 00:09:36.173 | 99.99th=[ 1844] 00:09:36.173 write: IOPS=698, BW=2793KiB/s (2860kB/s)(2796KiB/1001msec); 0 zone resets 00:09:36.173 slat (usec): min=9, max=25924, avg=65.95, stdev=979.51 00:09:36.173 clat (usec): min=180, max=2526, avg=591.65, stdev=133.74 00:09:36.173 lat (usec): min=190, max=26673, avg=657.60, stdev=994.96 00:09:36.173 clat percentiles (usec): 00:09:36.173 | 1.00th=[ 289], 5.00th=[ 371], 10.00th=[ 420], 20.00th=[ 502], 00:09:36.173 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:09:36.173 | 70.00th=[ 660], 80.00th=[ 685], 90.00th=[ 717], 95.00th=[ 734], 00:09:36.173 | 99.00th=[ 807], 99.50th=[ 824], 99.90th=[ 2540], 99.95th=[ 2540], 00:09:36.173 | 99.99th=[ 2540] 00:09:36.173 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:36.173 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:36.173 lat (usec) : 250=0.08%, 500=11.40%, 750=44.67%, 1000=18.00% 00:09:36.173 lat (msec) : 2=25.76%, 4=0.08% 00:09:36.173 cpu : usr=1.60%, sys=3.60%, ctx=1215, majf=0, minf=1 00:09:36.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.173 issued rwts: total=512,699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:36.173 00:09:36.173 Run status group 0 (all jobs): 00:09:36.173 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:09:36.173 WRITE: bw=2793KiB/s (2860kB/s), 2793KiB/s-2793KiB/s (2860kB/s-2860kB/s), io=2796KiB (2863kB), run=1001-1001msec 00:09:36.173 00:09:36.173 Disk stats (read/write): 00:09:36.173 nvme0n1: ios=564/535, merge=0/0, ticks=1020/310, in_queue=1330, util=98.60% 00:09:36.433 12:14:10 
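[annotation] The fio-wrapper flags (-p nvmf -i 4096 -d 1 -t write -r 1 -v) appear to map directly onto the job file it printed: 4096-byte blocks, iodepth 1, a one-second time-based write pass with crc32c-intel verification against the freshly connected namespace. An equivalent standalone job file, reconstructed from the dump above:

  [global]
  thread=1
  invalidate=1
  rw=write
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1

Run as 'fio job0.fio'. The "Could not set queue depth (nvme0n1)" line is fio noting that the block device ignored the requested depth; it is benign here, as the job completed with err=0.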
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:36.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:36.433 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:36.433 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:36.433 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:36.433 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.433 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:36.433 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.433 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:36.434 rmmod nvme_tcp 00:09:36.434 rmmod nvme_fabrics 00:09:36.434 rmmod nvme_keyring 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1497100 ']' 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1497100 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1497100 ']' 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1497100 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:36.434 12:14:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1497100 00:09:36.694 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:36.694 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:36.694 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1497100' 00:09:36.694 killing process with pid 1497100 00:09:36.694 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1497100 00:09:36.694 12:14:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1497100 00:09:36.694 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:36.694 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:36.694 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:36.694 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:36.694 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:36.694 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:36.694 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:36.694 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:36.694 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:36.694 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.694 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.694 12:14:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:39.239 00:09:39.239 real 0m17.671s 00:09:39.239 user 0m47.917s 00:09:39.239 sys 0m6.411s 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.239 ************************************ 00:09:39.239 END TEST nvmf_nmic 00:09:39.239 ************************************ 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:39.239 ************************************ 00:09:39.239 START TEST nvmf_fio_target 00:09:39.239 ************************************ 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:39.239 * Looking for test storage... 
00:09:39.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:39.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.239 --rc genhtml_branch_coverage=1 00:09:39.239 --rc genhtml_function_coverage=1 00:09:39.239 --rc genhtml_legend=1 00:09:39.239 --rc geninfo_all_blocks=1 00:09:39.239 --rc geninfo_unexecuted_blocks=1 00:09:39.239 00:09:39.239 ' 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:39.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.239 --rc genhtml_branch_coverage=1 00:09:39.239 --rc genhtml_function_coverage=1 00:09:39.239 --rc genhtml_legend=1 00:09:39.239 --rc geninfo_all_blocks=1 00:09:39.239 --rc geninfo_unexecuted_blocks=1 00:09:39.239 00:09:39.239 ' 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:39.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.239 --rc genhtml_branch_coverage=1 00:09:39.239 --rc genhtml_function_coverage=1 00:09:39.239 --rc genhtml_legend=1 00:09:39.239 --rc geninfo_all_blocks=1 00:09:39.239 --rc geninfo_unexecuted_blocks=1 00:09:39.239 00:09:39.239 ' 00:09:39.239 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:39.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.239 --rc genhtml_branch_coverage=1 00:09:39.240 --rc genhtml_function_coverage=1 00:09:39.240 --rc genhtml_legend=1 00:09:39.240 --rc geninfo_all_blocks=1 00:09:39.240 --rc geninfo_unexecuted_blocks=1 00:09:39.240 00:09:39.240 ' 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:39.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:39.240 12:14:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:39.240 12:14:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.827 12:14:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:45.827 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:45.827 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.827 12:14:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:45.827 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:45.827 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.827 12:14:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:45.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:09:45.827 00:09:45.827 --- 10.0.0.2 ping statistics --- 00:09:45.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.827 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:45.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:09:45.827 00:09:45.827 --- 10.0.0.1 ping statistics --- 00:09:45.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.827 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:09:45.827 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.828 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:09:45.828 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:45.828 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.828 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:45.828 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:45.828 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.828 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:45.828 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:46.088 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:46.088 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:46.088 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:46.088 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.088 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1502986 00:09:46.088 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1502986 00:09:46.088 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:46.088 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1502986 ']' 00:09:46.088 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.088 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:46.088 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.088 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:46.088 12:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.088 [2024-11-04 12:14:20.494291] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
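The nvmf_tcp_init block traced above gives the test a two-host topology on one machine: the first discovered port is moved into a private network namespace and addressed as the target (10.0.0.2), while its sibling stays in the root namespace as the initiator (10.0.0.1). Condensed into a sketch, with interface names taken from the discovery step and paths shortened:

    NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                        # target-side port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays put
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check
    ip netns exec $NS ping -c 1 10.0.0.1                 # target -> initiator sanity check
    # nvmfappstart then launches the target application inside the namespace:
    ip netns exec $NS build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

This is why every target-side command later in the log is prefixed with ip netns exec cvl_0_0_ns_spdk.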
00:09:46.088 [2024-11-04 12:14:20.494343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.088 [2024-11-04 12:14:20.564011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.088 [2024-11-04 12:14:20.601626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.088 [2024-11-04 12:14:20.601662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.088 [2024-11-04 12:14:20.601670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.088 [2024-11-04 12:14:20.601677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.088 [2024-11-04 12:14:20.601683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.088 [2024-11-04 12:14:20.603265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.088 [2024-11-04 12:14:20.603380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.088 [2024-11-04 12:14:20.603534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.088 [2024-11-04 12:14:20.603535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.029 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:47.029 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:47.029 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:47.029 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:47.029 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.029 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.030 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:47.030 [2024-11-04 12:14:21.481774] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.030 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.290 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:47.290 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.551 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:47.551 12:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.551 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:47.551 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.812 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:47.812 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:48.073 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:48.333 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:48.333 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:48.333 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:48.333 12:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:48.595 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:48.595 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:48.856 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:48.856 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:48.856 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:49.117 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:49.117 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:49.378 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:49.378 [2024-11-04 12:14:23.932016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.639 12:14:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:49.639 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:49.898 12:14:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:51.810 12:14:25 
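Everything target/fio.sh has configured up to this point can be read as one provisioning script: a TCP transport, seven 64 MiB malloc bdevs with 512-byte blocks, a RAID-0 and a concat array built from four of them, and a single subsystem exposing four namespaces on 10.0.0.2:4420. A sketch with the rpc.py path and the netns wrapper elided for brevity (all arguments are verbatim from the trace):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in {0..6}; do $rpc bdev_malloc_create 64 512; done        # Malloc0 .. Malloc6
    $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: surfaces the four namespaces as /dev/nvme0n1 .. /dev/nvme0n4
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420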
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:51.810 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:51.810 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:51.810 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:51.810 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:51.810 12:14:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:53.748 12:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:53.748 12:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:53.748 12:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:53.748 12:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:53.748 12:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:53.748 12:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:53.748 12:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:53.748 [global] 00:09:53.748 thread=1 00:09:53.748 invalidate=1 00:09:53.748 rw=write 00:09:53.748 time_based=1 00:09:53.748 runtime=1 00:09:53.748 ioengine=libaio 00:09:53.748 direct=1 00:09:53.748 bs=4096 00:09:53.748 iodepth=1 00:09:53.748 norandommap=0 00:09:53.748 numjobs=1 00:09:53.748 00:09:53.748 verify_dump=1 00:09:53.748 verify_backlog=512 00:09:53.748 verify_state_save=0 00:09:53.748 do_verify=1 00:09:53.748 verify=crc32c-intel 00:09:53.748 [job0] 00:09:53.748 filename=/dev/nvme0n1 00:09:53.748 [job1] 00:09:53.748 filename=/dev/nvme0n2 00:09:53.748 [job2] 00:09:53.748 filename=/dev/nvme0n3 00:09:53.748 [job3] 00:09:53.748 filename=/dev/nvme0n4 00:09:53.748 Could not set queue depth (nvme0n1) 00:09:53.748 Could not set queue depth (nvme0n2) 00:09:53.748 Could not set queue depth (nvme0n3) 00:09:53.748 Could not set queue depth (nvme0n4) 00:09:54.009 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.009 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.009 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.009 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.009 fio-3.35 00:09:54.009 Starting 4 threads 00:09:55.391 00:09:55.391 job0: (groupid=0, jobs=1): err= 0: pid=1504627: Mon Nov 4 12:14:29 2024 00:09:55.391 read: IOPS=16, BW=67.4KiB/s (69.0kB/s)(68.0KiB/1009msec) 00:09:55.391 slat (nsec): min=6843, max=26663, avg=24274.59, stdev=6030.44 00:09:55.391 clat (usec): min=892, max=42973, avg=39501.34, stdev=9962.92 00:09:55.391 lat (usec): min=902, max=43000, avg=39525.61, stdev=9966.74 00:09:55.391 clat percentiles (usec): 00:09:55.391 | 1.00th=[ 889], 5.00th=[ 889], 10.00th=[41157], 
20.00th=[41157], 00:09:55.391 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:09:55.391 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:09:55.391 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:55.391 | 99.99th=[42730] 00:09:55.391 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:09:55.391 slat (nsec): min=9429, max=55777, avg=30190.54, stdev=9422.83 00:09:55.391 clat (usec): min=243, max=908, avg=621.05, stdev=113.02 00:09:55.391 lat (usec): min=255, max=941, avg=651.24, stdev=117.18 00:09:55.391 clat percentiles (usec): 00:09:55.391 | 1.00th=[ 359], 5.00th=[ 424], 10.00th=[ 465], 20.00th=[ 519], 00:09:55.391 | 30.00th=[ 578], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 652], 00:09:55.391 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 783], 00:09:55.391 | 99.00th=[ 865], 99.50th=[ 881], 99.90th=[ 906], 99.95th=[ 906], 00:09:55.391 | 99.99th=[ 906] 00:09:55.391 bw ( KiB/s): min= 4096, max= 4096, per=42.26%, avg=4096.00, stdev= 0.00, samples=1 00:09:55.391 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:55.391 lat (usec) : 250=0.19%, 500=16.64%, 750=68.81%, 1000=11.34% 00:09:55.391 lat (msec) : 50=3.02% 00:09:55.391 cpu : usr=0.69%, sys=2.28%, ctx=529, majf=0, minf=1 00:09:55.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.391 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:55.391 job1: (groupid=0, jobs=1): err= 0: pid=1504646: Mon Nov 4 12:14:29 2024 00:09:55.391 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:55.391 slat (nsec): min=6669, max=45982, avg=26405.15, stdev=4458.97 00:09:55.391 clat (usec): min=680, max=41845, avg=1052.12, stdev=1808.23 00:09:55.391 lat (usec): min=708, max=41871, avg=1078.52, stdev=1808.27 00:09:55.391 clat percentiles (usec): 00:09:55.391 | 1.00th=[ 750], 5.00th=[ 816], 10.00th=[ 857], 20.00th=[ 906], 00:09:55.391 | 30.00th=[ 938], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 1004], 00:09:55.391 | 70.00th=[ 1020], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1090], 00:09:55.391 | 99.00th=[ 1139], 99.50th=[ 1205], 99.90th=[41681], 99.95th=[41681], 00:09:55.391 | 99.99th=[41681] 00:09:55.391 write: IOPS=704, BW=2817KiB/s (2885kB/s)(2820KiB/1001msec); 0 zone resets 00:09:55.391 slat (nsec): min=10131, max=53724, avg=31831.06, stdev=8890.14 00:09:55.391 clat (usec): min=259, max=919, avg=589.14, stdev=113.80 00:09:55.391 lat (usec): min=270, max=958, avg=620.97, stdev=117.02 00:09:55.391 clat percentiles (usec): 00:09:55.391 | 1.00th=[ 302], 5.00th=[ 383], 10.00th=[ 445], 20.00th=[ 494], 00:09:55.391 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 619], 00:09:55.391 | 70.00th=[ 660], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 758], 00:09:55.391 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 922], 99.95th=[ 922], 00:09:55.391 | 99.99th=[ 922] 00:09:55.391 bw ( KiB/s): min= 4096, max= 4096, per=42.26%, avg=4096.00, stdev= 0.00, samples=1 00:09:55.391 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:55.391 lat (usec) : 500=12.90%, 750=41.74%, 1000=28.02% 00:09:55.391 lat (msec) : 2=17.26%, 50=0.08% 00:09:55.391 cpu : usr=2.60%, sys=3.30%, ctx=1219, majf=0, minf=1 00:09:55.391 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.391 issued rwts: total=512,705,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:55.391 job2: (groupid=0, jobs=1): err= 0: pid=1504668: Mon Nov 4 12:14:29 2024 00:09:55.391 read: IOPS=266, BW=1064KiB/s (1090kB/s)(1076KiB/1011msec) 00:09:55.391 slat (nsec): min=27374, max=62817, avg=29244.05, stdev=3398.78 00:09:55.391 clat (usec): min=774, max=42928, avg=2370.11, stdev=7395.36 00:09:55.391 lat (usec): min=804, max=42957, avg=2399.35, stdev=7395.12 00:09:55.391 clat percentiles (usec): 00:09:55.391 | 1.00th=[ 799], 5.00th=[ 873], 10.00th=[ 898], 20.00th=[ 938], 00:09:55.391 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1029], 00:09:55.391 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1156], 00:09:55.391 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:09:55.391 | 99.99th=[42730] 00:09:55.391 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:09:55.391 slat (nsec): min=9699, max=66492, avg=33636.52, stdev=9093.80 00:09:55.391 clat (usec): min=258, max=1568, avg=666.46, stdev=143.24 00:09:55.391 lat (usec): min=294, max=1609, avg=700.10, stdev=145.90 00:09:55.391 clat percentiles (usec): 00:09:55.391 | 1.00th=[ 343], 5.00th=[ 449], 10.00th=[ 502], 20.00th=[ 545], 00:09:55.391 | 30.00th=[ 586], 40.00th=[ 635], 50.00th=[ 668], 60.00th=[ 701], 00:09:55.391 | 70.00th=[ 742], 80.00th=[ 775], 90.00th=[ 832], 95.00th=[ 906], 00:09:55.391 | 99.00th=[ 988], 99.50th=[ 1029], 99.90th=[ 1565], 99.95th=[ 1565], 00:09:55.391 | 99.99th=[ 1565] 00:09:55.391 bw ( KiB/s): min= 4096, max= 4096, per=42.26%, avg=4096.00, stdev= 0.00, samples=1 00:09:55.391 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:55.391 lat (usec) : 500=6.53%, 750=41.23%, 1000=33.80% 00:09:55.391 lat (msec) : 2=17.29%, 50=1.15% 00:09:55.391 cpu : usr=1.58%, sys=3.27%, ctx=783, majf=0, minf=1 00:09:55.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.391 issued rwts: total=269,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:55.391 job3: (groupid=0, jobs=1): err= 0: pid=1504675: Mon Nov 4 12:14:29 2024 00:09:55.391 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:55.391 slat (nsec): min=8874, max=61775, avg=27468.61, stdev=4172.37 00:09:55.391 clat (usec): min=787, max=1207, avg=1005.62, stdev=70.42 00:09:55.391 lat (usec): min=796, max=1234, avg=1033.09, stdev=71.01 00:09:55.391 clat percentiles (usec): 00:09:55.391 | 1.00th=[ 816], 5.00th=[ 873], 10.00th=[ 914], 20.00th=[ 955], 00:09:55.391 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1004], 60.00th=[ 1029], 00:09:55.391 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:09:55.391 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 1205], 99.95th=[ 1205], 00:09:55.391 | 99.99th=[ 1205] 00:09:55.391 write: IOPS=720, BW=2881KiB/s (2950kB/s)(2884KiB/1001msec); 0 zone resets 00:09:55.391 slat (nsec): min=9661, max=55680, avg=31200.38, stdev=10045.59 00:09:55.391 clat (usec): min=242, max=923, 
avg=609.59, stdev=113.98 00:09:55.391 lat (usec): min=253, max=958, avg=640.79, stdev=118.87 00:09:55.391 clat percentiles (usec): 00:09:55.391 | 1.00th=[ 351], 5.00th=[ 400], 10.00th=[ 457], 20.00th=[ 519], 00:09:55.391 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:09:55.391 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 783], 00:09:55.391 | 99.00th=[ 865], 99.50th=[ 881], 99.90th=[ 922], 99.95th=[ 922], 00:09:55.391 | 99.99th=[ 922] 00:09:55.391 bw ( KiB/s): min= 4096, max= 4096, per=42.26%, avg=4096.00, stdev= 0.00, samples=1 00:09:55.391 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:55.391 lat (usec) : 250=0.08%, 500=10.38%, 750=42.25%, 1000=23.68% 00:09:55.391 lat (msec) : 2=23.60% 00:09:55.391 cpu : usr=2.60%, sys=4.70%, ctx=1234, majf=0, minf=1 00:09:55.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.391 issued rwts: total=512,721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:55.391 00:09:55.391 Run status group 0 (all jobs): 00:09:55.391 READ: bw=5183KiB/s (5307kB/s), 67.4KiB/s-2046KiB/s (69.0kB/s-2095kB/s), io=5240KiB (5366kB), run=1001-1011msec 00:09:55.391 WRITE: bw=9693KiB/s (9926kB/s), 2026KiB/s-2881KiB/s (2074kB/s-2950kB/s), io=9800KiB (10.0MB), run=1001-1011msec 00:09:55.391 00:09:55.391 Disk stats (read/write): 00:09:55.391 nvme0n1: ios=62/512, merge=0/0, ticks=605/242, in_queue=847, util=96.29% 00:09:55.391 nvme0n2: ios=489/512, merge=0/0, ticks=1441/299, in_queue=1740, util=97.04% 00:09:55.391 nvme0n3: ios=276/512, merge=0/0, ticks=1115/263, in_queue=1378, util=96.93% 00:09:55.391 nvme0n4: ios=498/512, merge=0/0, ticks=1369/254, in_queue=1623, util=96.90% 00:09:55.391 12:14:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:55.391 [global] 00:09:55.391 thread=1 00:09:55.391 invalidate=1 00:09:55.391 rw=randwrite 00:09:55.391 time_based=1 00:09:55.391 runtime=1 00:09:55.391 ioengine=libaio 00:09:55.391 direct=1 00:09:55.391 bs=4096 00:09:55.391 iodepth=1 00:09:55.392 norandommap=0 00:09:55.392 numjobs=1 00:09:55.392 00:09:55.392 verify_dump=1 00:09:55.392 verify_backlog=512 00:09:55.392 verify_state_save=0 00:09:55.392 do_verify=1 00:09:55.392 verify=crc32c-intel 00:09:55.392 [job0] 00:09:55.392 filename=/dev/nvme0n1 00:09:55.392 [job1] 00:09:55.392 filename=/dev/nvme0n2 00:09:55.392 [job2] 00:09:55.392 filename=/dev/nvme0n3 00:09:55.392 [job3] 00:09:55.392 filename=/dev/nvme0n4 00:09:55.392 Could not set queue depth (nvme0n1) 00:09:55.392 Could not set queue depth (nvme0n2) 00:09:55.392 Could not set queue depth (nvme0n3) 00:09:55.392 Could not set queue depth (nvme0n4) 00:09:55.651 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.651 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.651 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.651 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.651 fio-3.35 00:09:55.651 Starting 4 threads 
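The write and randwrite runs in this log share the wrapper-generated job file shape dumped above, with rw= and iodepth= varying between runs; the verify options make every job write CRC32C-protected data and read it back (the final 10-second read pass drops verification and sets norandommap=1). A reconstruction of the job file for the randwrite run just started, as a replayable heredoc; the path /tmp/nvmf-fio.job is hypothetical, since the wrapper feeds the config to fio directly:

    cat > /tmp/nvmf-fio.job <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=randwrite            ; write / randwrite / read across the runs in this log
    time_based=1
    runtime=1               ; 10 for the final read pass
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1               ; 128 for the queue-depth runs
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF
    fio /tmp/nvmf-fio.job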
00:09:57.062 00:09:57.062 job0: (groupid=0, jobs=1): err= 0: pid=1505151: Mon Nov 4 12:14:31 2024 00:09:57.062 read: IOPS=15, BW=61.5KiB/s (63.0kB/s)(64.0KiB/1040msec) 00:09:57.062 slat (nsec): min=24411, max=25244, avg=24723.06, stdev=229.20 00:09:57.062 clat (usec): min=41539, max=43007, avg=41996.15, stdev=292.68 00:09:57.062 lat (usec): min=41564, max=43032, avg=42020.87, stdev=292.69 00:09:57.062 clat percentiles (usec): 00:09:57.062 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:09:57.062 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:57.062 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254], 00:09:57.062 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:57.062 | 99.99th=[43254] 00:09:57.062 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:09:57.062 slat (nsec): min=8903, max=50943, avg=27598.68, stdev=9487.34 00:09:57.062 clat (usec): min=243, max=960, avg=683.32, stdev=110.10 00:09:57.062 lat (usec): min=253, max=991, avg=710.92, stdev=114.46 00:09:57.062 clat percentiles (usec): 00:09:57.062 | 1.00th=[ 392], 5.00th=[ 486], 10.00th=[ 529], 20.00th=[ 594], 00:09:57.062 | 30.00th=[ 635], 40.00th=[ 668], 50.00th=[ 701], 60.00th=[ 717], 00:09:57.062 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 816], 95.00th=[ 840], 00:09:57.062 | 99.00th=[ 889], 99.50th=[ 906], 99.90th=[ 963], 99.95th=[ 963], 00:09:57.062 | 99.99th=[ 963] 00:09:57.062 bw ( KiB/s): min= 4096, max= 4096, per=44.17%, avg=4096.00, stdev= 0.00, samples=1 00:09:57.062 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:57.062 lat (usec) : 250=0.19%, 500=5.68%, 750=62.69%, 1000=28.41% 00:09:57.062 lat (msec) : 50=3.03% 00:09:57.062 cpu : usr=0.77%, sys=1.25%, ctx=529, majf=0, minf=2 00:09:57.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.062 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.062 job1: (groupid=0, jobs=1): err= 0: pid=1505163: Mon Nov 4 12:14:31 2024 00:09:57.062 read: IOPS=408, BW=1633KiB/s (1672kB/s)(1656KiB/1014msec) 00:09:57.062 slat (nsec): min=9024, max=70061, avg=26124.59, stdev=3918.13 00:09:57.062 clat (usec): min=683, max=41913, avg=1578.82, stdev=4877.42 00:09:57.062 lat (usec): min=709, max=41939, avg=1604.95, stdev=4877.37 00:09:57.062 clat percentiles (usec): 00:09:57.062 | 1.00th=[ 725], 5.00th=[ 816], 10.00th=[ 865], 20.00th=[ 914], 00:09:57.062 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020], 00:09:57.062 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:09:57.062 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:57.062 | 99.99th=[41681] 00:09:57.062 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:09:57.062 slat (nsec): min=9737, max=72474, avg=30036.71, stdev=9248.98 00:09:57.062 clat (usec): min=280, max=976, avg=637.43, stdev=122.09 00:09:57.062 lat (usec): min=304, max=1009, avg=667.46, stdev=125.49 00:09:57.062 clat percentiles (usec): 00:09:57.062 | 1.00th=[ 338], 5.00th=[ 400], 10.00th=[ 469], 20.00th=[ 545], 00:09:57.062 | 30.00th=[ 578], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 685], 00:09:57.062 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 783], 
95.00th=[ 816], 00:09:57.062 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 979], 99.95th=[ 979], 00:09:57.062 | 99.99th=[ 979] 00:09:57.062 bw ( KiB/s): min= 4096, max= 4096, per=44.17%, avg=4096.00, stdev= 0.00, samples=1 00:09:57.062 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:57.062 lat (usec) : 500=7.67%, 750=38.77%, 1000=29.37% 00:09:57.062 lat (msec) : 2=23.54%, 50=0.65% 00:09:57.062 cpu : usr=1.09%, sys=2.96%, ctx=927, majf=0, minf=1 00:09:57.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.062 issued rwts: total=414,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.062 job2: (groupid=0, jobs=1): err= 0: pid=1505183: Mon Nov 4 12:14:31 2024 00:09:57.062 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:57.062 slat (nsec): min=7277, max=45522, avg=26208.90, stdev=3686.58 00:09:57.062 clat (usec): min=540, max=1388, avg=1090.20, stdev=115.03 00:09:57.062 lat (usec): min=566, max=1414, avg=1116.41, stdev=115.12 00:09:57.062 clat percentiles (usec): 00:09:57.063 | 1.00th=[ 799], 5.00th=[ 873], 10.00th=[ 930], 20.00th=[ 1012], 00:09:57.063 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1123], 00:09:57.063 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1270], 00:09:57.063 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1385], 99.95th=[ 1385], 00:09:57.063 | 99.99th=[ 1385] 00:09:57.063 write: IOPS=627, BW=2509KiB/s (2570kB/s)(2512KiB/1001msec); 0 zone resets 00:09:57.063 slat (nsec): min=9418, max=51695, avg=28630.01, stdev=9071.39 00:09:57.063 clat (usec): min=266, max=958, avg=639.22, stdev=118.14 00:09:57.063 lat (usec): min=276, max=991, avg=667.85, stdev=121.70 00:09:57.063 clat percentiles (usec): 00:09:57.063 | 1.00th=[ 359], 5.00th=[ 420], 10.00th=[ 478], 20.00th=[ 537], 00:09:57.063 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 685], 00:09:57.063 | 70.00th=[ 709], 80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 807], 00:09:57.063 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 955], 99.95th=[ 955], 00:09:57.063 | 99.99th=[ 955] 00:09:57.063 bw ( KiB/s): min= 4096, max= 4096, per=44.17%, avg=4096.00, stdev= 0.00, samples=1 00:09:57.063 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:57.063 lat (usec) : 500=7.37%, 750=39.91%, 1000=16.23% 00:09:57.063 lat (msec) : 2=36.49% 00:09:57.063 cpu : usr=1.20%, sys=3.80%, ctx=1140, majf=0, minf=1 00:09:57.063 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.063 issued rwts: total=512,628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.063 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.063 job3: (groupid=0, jobs=1): err= 0: pid=1505190: Mon Nov 4 12:14:31 2024 00:09:57.063 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:57.063 slat (nsec): min=24783, max=44416, avg=25661.76, stdev=1300.99 00:09:57.063 clat (usec): min=638, max=1248, avg=961.65, stdev=88.94 00:09:57.063 lat (usec): min=664, max=1292, avg=987.31, stdev=88.93 00:09:57.063 clat percentiles (usec): 00:09:57.063 | 1.00th=[ 709], 5.00th=[ 758], 10.00th=[ 840], 20.00th=[ 906], 
00:09:57.063 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 979], 60.00th=[ 996], 00:09:57.063 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1074], 00:09:57.063 | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1254], 99.95th=[ 1254], 00:09:57.063 | 99.99th=[ 1254] 00:09:57.063 write: IOPS=758, BW=3033KiB/s (3106kB/s)(3036KiB/1001msec); 0 zone resets 00:09:57.063 slat (nsec): min=9301, max=52612, avg=28527.32, stdev=8604.85 00:09:57.063 clat (usec): min=217, max=889, avg=610.62, stdev=110.41 00:09:57.063 lat (usec): min=227, max=920, avg=639.14, stdev=114.38 00:09:57.063 clat percentiles (usec): 00:09:57.063 | 1.00th=[ 334], 5.00th=[ 400], 10.00th=[ 457], 20.00th=[ 519], 00:09:57.063 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:09:57.063 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 766], 00:09:57.063 | 99.00th=[ 840], 99.50th=[ 857], 99.90th=[ 889], 99.95th=[ 889], 00:09:57.063 | 99.99th=[ 889] 00:09:57.063 bw ( KiB/s): min= 4096, max= 4096, per=44.17%, avg=4096.00, stdev= 0.00, samples=1 00:09:57.063 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:57.063 lat (usec) : 250=0.08%, 500=9.68%, 750=46.50%, 1000=29.90% 00:09:57.063 lat (msec) : 2=13.85% 00:09:57.063 cpu : usr=2.00%, sys=3.50%, ctx=1271, majf=0, minf=2 00:09:57.063 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:57.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.063 issued rwts: total=512,759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.063 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:57.063 00:09:57.063 Run status group 0 (all jobs): 00:09:57.063 READ: bw=5592KiB/s (5727kB/s), 61.5KiB/s-2046KiB/s (63.0kB/s-2095kB/s), io=5816KiB (5956kB), run=1001-1040msec 00:09:57.063 WRITE: bw=9273KiB/s (9496kB/s), 1969KiB/s-3033KiB/s (2016kB/s-3106kB/s), io=9644KiB (9875kB), run=1001-1040msec 00:09:57.063 00:09:57.063 Disk stats (read/write): 00:09:57.063 nvme0n1: ios=61/512, merge=0/0, ticks=564/323, in_queue=887, util=92.18% 00:09:57.063 nvme0n2: ios=384/512, merge=0/0, ticks=1434/310, in_queue=1744, util=97.25% 00:09:57.063 nvme0n3: ios=479/512, merge=0/0, ticks=559/319, in_queue=878, util=92.30% 00:09:57.063 nvme0n4: ios=504/512, merge=0/0, ticks=487/298, in_queue=785, util=89.53% 00:09:57.063 12:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:57.063 [global] 00:09:57.063 thread=1 00:09:57.063 invalidate=1 00:09:57.063 rw=write 00:09:57.063 time_based=1 00:09:57.063 runtime=1 00:09:57.063 ioengine=libaio 00:09:57.063 direct=1 00:09:57.063 bs=4096 00:09:57.063 iodepth=128 00:09:57.063 norandommap=0 00:09:57.063 numjobs=1 00:09:57.063 00:09:57.063 verify_dump=1 00:09:57.063 verify_backlog=512 00:09:57.063 verify_state_save=0 00:09:57.063 do_verify=1 00:09:57.063 verify=crc32c-intel 00:09:57.063 [job0] 00:09:57.063 filename=/dev/nvme0n1 00:09:57.063 [job1] 00:09:57.063 filename=/dev/nvme0n2 00:09:57.063 [job2] 00:09:57.063 filename=/dev/nvme0n3 00:09:57.063 [job3] 00:09:57.063 filename=/dev/nvme0n4 00:09:57.063 Could not set queue depth (nvme0n1) 00:09:57.063 Could not set queue depth (nvme0n2) 00:09:57.063 Could not set queue depth (nvme0n3) 00:09:57.063 Could not set queue depth (nvme0n4) 00:09:57.326 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:57.326 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:57.326 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:57.326 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:57.326 fio-3.35 00:09:57.326 Starting 4 threads 00:09:58.730 00:09:58.730 job0: (groupid=0, jobs=1): err= 0: pid=1505666: Mon Nov 4 12:14:32 2024 00:09:58.730 read: IOPS=7005, BW=27.4MiB/s (28.7MB/s)(27.5MiB/1004msec) 00:09:58.730 slat (nsec): min=948, max=13741k, avg=68046.26, stdev=584829.34 00:09:58.730 clat (usec): min=1664, max=27649, avg=9120.61, stdev=3676.27 00:09:58.730 lat (usec): min=1684, max=27676, avg=9188.65, stdev=3705.87 00:09:58.730 clat percentiles (usec): 00:09:58.730 | 1.00th=[ 3720], 5.00th=[ 5800], 10.00th=[ 6259], 20.00th=[ 6980], 00:09:58.730 | 30.00th=[ 7373], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8225], 00:09:58.730 | 70.00th=[ 9110], 80.00th=[10290], 90.00th=[13960], 95.00th=[18744], 00:09:58.730 | 99.00th=[21103], 99.50th=[25560], 99.90th=[25560], 99.95th=[25822], 00:09:58.730 | 99.99th=[27657] 00:09:58.730 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:09:58.730 slat (nsec): min=1620, max=44373k, avg=63328.14, stdev=640558.18 00:09:58.730 clat (usec): min=1575, max=51141, avg=7936.46, stdev=3088.16 00:09:58.730 lat (usec): min=1584, max=56541, avg=7999.79, stdev=3172.68 00:09:58.730 clat percentiles (usec): 00:09:58.730 | 1.00th=[ 2376], 5.00th=[ 4047], 10.00th=[ 4752], 20.00th=[ 6259], 00:09:58.730 | 30.00th=[ 6849], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7570], 00:09:58.730 | 70.00th=[ 7898], 80.00th=[ 9110], 90.00th=[12125], 95.00th=[14615], 00:09:58.730 | 99.00th=[17957], 99.50th=[18744], 99.90th=[19792], 99.95th=[22152], 00:09:58.730 | 99.99th=[51119] 00:09:58.730 bw ( KiB/s): min=24576, max=32768, per=27.79%, avg=28672.00, stdev=5792.62, samples=2 00:09:58.730 iops : min= 6144, max= 8192, avg=7168.00, stdev=1448.15, samples=2 00:09:58.730 lat (msec) : 2=0.31%, 4=3.03%, 10=76.31%, 20=18.88%, 50=1.46% 00:09:58.730 lat (msec) : 100=0.01% 00:09:58.730 cpu : usr=5.28%, sys=6.48%, ctx=673, majf=0, minf=1 00:09:58.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:58.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.730 issued rwts: total=7034,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.730 job1: (groupid=0, jobs=1): err= 0: pid=1505670: Mon Nov 4 12:14:32 2024 00:09:58.730 read: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec) 00:09:58.730 slat (nsec): min=942, max=8773.8k, avg=70369.22, stdev=527122.30 00:09:58.730 clat (usec): min=2287, max=48854, avg=9331.69, stdev=3661.92 00:09:58.730 lat (usec): min=2344, max=49021, avg=9402.06, stdev=3704.53 00:09:58.730 clat percentiles (usec): 00:09:58.730 | 1.00th=[ 3261], 5.00th=[ 5669], 10.00th=[ 6063], 20.00th=[ 7046], 00:09:58.730 | 30.00th=[ 7570], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 9241], 00:09:58.730 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[13304], 95.00th=[15270], 00:09:58.730 | 99.00th=[20841], 99.50th=[23987], 99.90th=[44827], 99.95th=[44827], 00:09:58.730 | 99.99th=[49021] 00:09:58.730 write: IOPS=6981, BW=27.3MiB/s 
(28.6MB/s)(27.5MiB/1007msec); 0 zone resets 00:09:58.730 slat (nsec): min=1677, max=9328.1k, avg=59554.25, stdev=350042.77 00:09:58.730 clat (usec): min=550, max=28022, avg=9314.06, stdev=4860.29 00:09:58.730 lat (usec): min=560, max=28026, avg=9373.62, stdev=4897.61 00:09:58.730 clat percentiles (usec): 00:09:58.730 | 1.00th=[ 1516], 5.00th=[ 3326], 10.00th=[ 4047], 20.00th=[ 5735], 00:09:58.730 | 30.00th=[ 6718], 40.00th=[ 7177], 50.00th=[ 7570], 60.00th=[ 8160], 00:09:58.730 | 70.00th=[10683], 80.00th=[14353], 90.00th=[17171], 95.00th=[18744], 00:09:58.730 | 99.00th=[21103], 99.50th=[21627], 99.90th=[21890], 99.95th=[22676], 00:09:58.730 | 99.99th=[27919] 00:09:58.730 bw ( KiB/s): min=22456, max=32768, per=26.76%, avg=27612.00, stdev=7291.69, samples=2 00:09:58.730 iops : min= 5614, max= 8192, avg=6903.00, stdev=1822.92, samples=2 00:09:58.730 lat (usec) : 750=0.09%, 1000=0.08% 00:09:58.730 lat (msec) : 2=0.58%, 4=4.84%, 10=64.31%, 20=27.69%, 50=2.42% 00:09:58.730 cpu : usr=4.67%, sys=7.85%, ctx=612, majf=0, minf=2 00:09:58.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:58.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.730 issued rwts: total=6656,7030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.730 job2: (groupid=0, jobs=1): err= 0: pid=1505675: Mon Nov 4 12:14:32 2024 00:09:58.730 read: IOPS=3921, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1004msec) 00:09:58.730 slat (nsec): min=948, max=25150k, avg=142389.38, stdev=1097100.81 00:09:58.730 clat (usec): min=1456, max=66111, avg=18342.69, stdev=13660.03 00:09:58.730 lat (usec): min=1466, max=66115, avg=18485.08, stdev=13730.13 00:09:58.730 clat percentiles (usec): 00:09:58.730 | 1.00th=[ 5080], 5.00th=[ 6063], 10.00th=[ 8455], 20.00th=[10421], 00:09:58.730 | 30.00th=[11600], 40.00th=[12518], 50.00th=[12780], 60.00th=[14484], 00:09:58.730 | 70.00th=[16188], 80.00th=[25297], 90.00th=[38536], 95.00th=[55837], 00:09:58.730 | 99.00th=[65274], 99.50th=[66323], 99.90th=[66323], 99.95th=[66323], 00:09:58.730 | 99.99th=[66323] 00:09:58.730 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:09:58.730 slat (nsec): min=1779, max=12575k, avg=98773.67, stdev=612722.82 00:09:58.730 clat (usec): min=576, max=33591, avg=13412.00, stdev=5136.98 00:09:58.730 lat (usec): min=610, max=33601, avg=13510.77, stdev=5153.96 00:09:58.730 clat percentiles (usec): 00:09:58.730 | 1.00th=[ 1319], 5.00th=[ 6783], 10.00th=[ 8848], 20.00th=[11207], 00:09:58.730 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12518], 60.00th=[13304], 00:09:58.730 | 70.00th=[13698], 80.00th=[14353], 90.00th=[19268], 95.00th=[24773], 00:09:58.730 | 99.00th=[32375], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:09:58.730 | 99.99th=[33817] 00:09:58.730 bw ( KiB/s): min=16384, max=16384, per=15.88%, avg=16384.00, stdev= 0.00, samples=2 00:09:58.730 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:58.730 lat (usec) : 750=0.04%, 1000=0.04% 00:09:58.730 lat (msec) : 2=1.27%, 4=0.46%, 10=13.69%, 20=68.72%, 50=13.05% 00:09:58.730 lat (msec) : 100=2.74% 00:09:58.730 cpu : usr=2.59%, sys=5.28%, ctx=302, majf=0, minf=1 00:09:58.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:58.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.730 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.730 issued rwts: total=3937,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.730 job3: (groupid=0, jobs=1): err= 0: pid=1505683: Mon Nov 4 12:14:32 2024 00:09:58.730 read: IOPS=7494, BW=29.3MiB/s (30.7MB/s)(29.5MiB/1007msec) 00:09:58.730 slat (nsec): min=989, max=8242.0k, avg=69682.62, stdev=520892.41 00:09:58.730 clat (usec): min=3566, max=19767, avg=8974.17, stdev=2186.95 00:09:58.730 lat (usec): min=3817, max=19815, avg=9043.85, stdev=2218.57 00:09:58.731 clat percentiles (usec): 00:09:58.731 | 1.00th=[ 4293], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 7373], 00:09:58.731 | 30.00th=[ 7767], 40.00th=[ 8029], 50.00th=[ 8291], 60.00th=[ 8979], 00:09:58.731 | 70.00th=[ 9503], 80.00th=[10552], 90.00th=[12387], 95.00th=[13698], 00:09:58.731 | 99.00th=[15139], 99.50th=[15926], 99.90th=[16581], 99.95th=[16712], 00:09:58.731 | 99.99th=[19792] 00:09:58.731 write: IOPS=7626, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1007msec); 0 zone resets 00:09:58.731 slat (nsec): min=1677, max=6956.5k, avg=56443.64, stdev=310889.40 00:09:58.731 clat (usec): min=731, max=18755, avg=7705.42, stdev=2358.50 00:09:58.731 lat (usec): min=740, max=18757, avg=7761.86, stdev=2371.41 00:09:58.731 clat percentiles (usec): 00:09:58.731 | 1.00th=[ 2900], 5.00th=[ 3949], 10.00th=[ 4686], 20.00th=[ 5997], 00:09:58.731 | 30.00th=[ 6849], 40.00th=[ 7504], 50.00th=[ 7898], 60.00th=[ 8094], 00:09:58.731 | 70.00th=[ 8586], 80.00th=[ 9110], 90.00th=[ 9634], 95.00th=[10945], 00:09:58.731 | 99.00th=[17957], 99.50th=[18482], 99.90th=[18744], 99.95th=[18744], 00:09:58.731 | 99.99th=[18744] 00:09:58.731 bw ( KiB/s): min=29936, max=31504, per=29.78%, avg=30720.00, stdev=1108.74, samples=2 00:09:58.731 iops : min= 7484, max= 7876, avg=7680.00, stdev=277.19, samples=2 00:09:58.731 lat (usec) : 750=0.01%, 1000=0.01% 00:09:58.731 lat (msec) : 2=0.08%, 4=2.65%, 10=79.88%, 20=17.36% 00:09:58.731 cpu : usr=6.66%, sys=6.66%, ctx=801, majf=0, minf=1 00:09:58.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:58.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.731 issued rwts: total=7547,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.731 00:09:58.731 Run status group 0 (all jobs): 00:09:58.731 READ: bw=97.7MiB/s (102MB/s), 15.3MiB/s-29.3MiB/s (16.1MB/s-30.7MB/s), io=98.3MiB (103MB), run=1004-1007msec 00:09:58.731 WRITE: bw=101MiB/s (106MB/s), 15.9MiB/s-29.8MiB/s (16.7MB/s-31.2MB/s), io=101MiB (106MB), run=1004-1007msec 00:09:58.731 00:09:58.731 Disk stats (read/write): 00:09:58.731 nvme0n1: ios=5548/5632, merge=0/0, ticks=48210/43394, in_queue=91604, util=86.17% 00:09:58.731 nvme0n2: ios=5681/6143, merge=0/0, ticks=49319/50927, in_queue=100246, util=88.07% 00:09:58.731 nvme0n3: ios=3450/3584, merge=0/0, ticks=22751/19499, in_queue=42250, util=93.02% 00:09:58.731 nvme0n4: ios=6198/6390, merge=0/0, ticks=52912/47381, in_queue=100293, util=97.01% 00:09:58.731 12:14:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:58.731 [global] 00:09:58.731 thread=1 00:09:58.731 invalidate=1 00:09:58.731 rw=randwrite 00:09:58.731 time_based=1 00:09:58.731 runtime=1 00:09:58.731 
ioengine=libaio 00:09:58.731 direct=1 00:09:58.731 bs=4096 00:09:58.731 iodepth=128 00:09:58.731 norandommap=0 00:09:58.731 numjobs=1 00:09:58.731 00:09:58.731 verify_dump=1 00:09:58.731 verify_backlog=512 00:09:58.731 verify_state_save=0 00:09:58.731 do_verify=1 00:09:58.731 verify=crc32c-intel 00:09:58.731 [job0] 00:09:58.731 filename=/dev/nvme0n1 00:09:58.731 [job1] 00:09:58.731 filename=/dev/nvme0n2 00:09:58.731 [job2] 00:09:58.731 filename=/dev/nvme0n3 00:09:58.731 [job3] 00:09:58.731 filename=/dev/nvme0n4 00:09:58.731 Could not set queue depth (nvme0n1) 00:09:58.731 Could not set queue depth (nvme0n2) 00:09:58.731 Could not set queue depth (nvme0n3) 00:09:58.731 Could not set queue depth (nvme0n4) 00:09:58.993 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.993 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.993 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.993 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.993 fio-3.35 00:09:58.993 Starting 4 threads 00:10:00.401 00:10:00.401 job0: (groupid=0, jobs=1): err= 0: pid=1506182: Mon Nov 4 12:14:34 2024 00:10:00.401 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:10:00.401 slat (nsec): min=916, max=24522k, avg=117285.94, stdev=957090.14 00:10:00.401 clat (usec): min=1284, max=46940, avg=15846.55, stdev=8777.90 00:10:00.401 lat (usec): min=1291, max=54174, avg=15963.84, stdev=8875.26 00:10:00.401 clat percentiles (usec): 00:10:00.401 | 1.00th=[ 2245], 5.00th=[ 2900], 10.00th=[ 4293], 20.00th=[ 8094], 00:10:00.401 | 30.00th=[10028], 40.00th=[12518], 50.00th=[15401], 60.00th=[17695], 00:10:00.401 | 70.00th=[21365], 80.00th=[22938], 90.00th=[26084], 95.00th=[30278], 00:10:00.401 | 99.00th=[39584], 99.50th=[44303], 99.90th=[46924], 99.95th=[46924], 00:10:00.401 | 99.99th=[46924] 00:10:00.401 write: IOPS=4956, BW=19.4MiB/s (20.3MB/s)(19.4MiB/1003msec); 0 zone resets 00:10:00.401 slat (nsec): min=1590, max=18393k, avg=87721.58, stdev=624256.01 00:10:00.401 clat (usec): min=421, max=85025, avg=12560.98, stdev=12790.45 00:10:00.401 lat (usec): min=464, max=86077, avg=12648.70, stdev=12871.68 00:10:00.401 clat percentiles (usec): 00:10:00.401 | 1.00th=[ 701], 5.00th=[ 1762], 10.00th=[ 3523], 20.00th=[ 5211], 00:10:00.401 | 30.00th=[ 6325], 40.00th=[ 7177], 50.00th=[ 9372], 60.00th=[11076], 00:10:00.401 | 70.00th=[13829], 80.00th=[15270], 90.00th=[21890], 95.00th=[40633], 00:10:00.401 | 99.00th=[70779], 99.50th=[74974], 99.90th=[85459], 99.95th=[85459], 00:10:00.401 | 99.99th=[85459] 00:10:00.401 bw ( KiB/s): min=15256, max=23496, per=27.54%, avg=19376.00, stdev=5826.56, samples=2 00:10:00.401 iops : min= 3814, max= 5874, avg=4844.00, stdev=1456.64, samples=2 00:10:00.401 lat (usec) : 500=0.18%, 750=0.56%, 1000=0.33% 00:10:00.401 lat (msec) : 2=3.28%, 4=7.19%, 10=30.41%, 20=36.63%, 50=19.63% 00:10:00.401 lat (msec) : 100=1.80% 00:10:00.401 cpu : usr=3.19%, sys=5.89%, ctx=358, majf=0, minf=1 00:10:00.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:00.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.401 issued rwts: total=4096,4971,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.401 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:10:00.401 job1: (groupid=0, jobs=1): err= 0: pid=1506188: Mon Nov 4 12:14:34 2024 00:10:00.401 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:10:00.401 slat (nsec): min=879, max=13109k, avg=75288.40, stdev=561312.89 00:10:00.401 clat (usec): min=3781, max=29716, avg=10553.58, stdev=4111.96 00:10:00.401 lat (usec): min=3783, max=29742, avg=10628.87, stdev=4152.63 00:10:00.401 clat percentiles (usec): 00:10:00.401 | 1.00th=[ 4817], 5.00th=[ 5800], 10.00th=[ 6325], 20.00th=[ 7111], 00:10:00.401 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 9634], 60.00th=[11076], 00:10:00.401 | 70.00th=[11994], 80.00th=[13304], 90.00th=[16712], 95.00th=[18482], 00:10:00.401 | 99.00th=[23725], 99.50th=[23987], 99.90th=[25035], 99.95th=[25035], 00:10:00.401 | 99.99th=[29754] 00:10:00.401 write: IOPS=6241, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1003msec); 0 zone resets 00:10:00.401 slat (nsec): min=1490, max=15770k, avg=74051.84, stdev=555152.32 00:10:00.401 clat (usec): min=341, max=54846, avg=9966.23, stdev=6662.50 00:10:00.401 lat (usec): min=1267, max=54856, avg=10040.28, stdev=6706.37 00:10:00.401 clat percentiles (usec): 00:10:00.401 | 1.00th=[ 4359], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6587], 00:10:00.401 | 30.00th=[ 7177], 40.00th=[ 7439], 50.00th=[ 7898], 60.00th=[ 9241], 00:10:00.401 | 70.00th=[ 9896], 80.00th=[11600], 90.00th=[15139], 95.00th=[17957], 00:10:00.401 | 99.00th=[46400], 99.50th=[51119], 99.90th=[54789], 99.95th=[54789], 00:10:00.401 | 99.99th=[54789] 00:10:00.401 bw ( KiB/s): min=20480, max=28688, per=34.94%, avg=24584.00, stdev=5803.93, samples=2 00:10:00.401 iops : min= 5120, max= 7172, avg=6146.00, stdev=1450.98, samples=2 00:10:00.401 lat (usec) : 500=0.01% 00:10:00.401 lat (msec) : 2=0.02%, 4=0.48%, 10=61.65%, 20=34.67%, 50=2.75% 00:10:00.401 lat (msec) : 100=0.44% 00:10:00.401 cpu : usr=4.99%, sys=5.39%, ctx=484, majf=0, minf=1 00:10:00.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:00.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.401 issued rwts: total=6144,6260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.401 job2: (groupid=0, jobs=1): err= 0: pid=1506198: Mon Nov 4 12:14:34 2024 00:10:00.401 read: IOPS=4494, BW=17.6MiB/s (18.4MB/s)(17.6MiB/1002msec) 00:10:00.401 slat (nsec): min=936, max=18079k, avg=100807.59, stdev=795616.84 00:10:00.401 clat (usec): min=1217, max=48689, avg=12643.40, stdev=7335.34 00:10:00.401 lat (usec): min=3390, max=48698, avg=12744.21, stdev=7408.37 00:10:00.401 clat percentiles (usec): 00:10:00.401 | 1.00th=[ 3425], 5.00th=[ 6259], 10.00th=[ 7635], 20.00th=[ 8094], 00:10:00.401 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[11469], 00:10:00.401 | 70.00th=[13435], 80.00th=[17695], 90.00th=[21103], 95.00th=[25822], 00:10:00.401 | 99.00th=[48497], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:10:00.401 | 99.99th=[48497] 00:10:00.401 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:10:00.401 slat (nsec): min=1542, max=29920k, avg=111735.45, stdev=895872.69 00:10:00.401 clat (usec): min=374, max=70538, avg=15062.79, stdev=10539.11 00:10:00.401 lat (usec): min=401, max=70570, avg=15174.52, stdev=10625.68 00:10:00.401 clat percentiles (usec): 00:10:00.401 | 1.00th=[ 2704], 5.00th=[ 4817], 10.00th=[ 6521], 20.00th=[ 8029], 
00:10:00.401 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[11469], 60.00th=[13304], 00:10:00.401 | 70.00th=[15270], 80.00th=[21627], 90.00th=[30016], 95.00th=[40633], 00:10:00.401 | 99.00th=[53740], 99.50th=[53740], 99.90th=[54264], 99.95th=[54264], 00:10:00.401 | 99.99th=[70779] 00:10:00.401 bw ( KiB/s): min=12288, max=24576, per=26.20%, avg=18432.00, stdev=8688.93, samples=2 00:10:00.401 iops : min= 3072, max= 6144, avg=4608.00, stdev=2172.23, samples=2 00:10:00.401 lat (usec) : 500=0.02% 00:10:00.401 lat (msec) : 2=0.11%, 4=1.34%, 10=46.90%, 20=34.98%, 50=15.94% 00:10:00.401 lat (msec) : 100=0.71% 00:10:00.401 cpu : usr=2.70%, sys=5.39%, ctx=356, majf=0, minf=1 00:10:00.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:00.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.401 issued rwts: total=4503,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.401 job3: (groupid=0, jobs=1): err= 0: pid=1506202: Mon Nov 4 12:14:34 2024 00:10:00.401 read: IOPS=2173, BW=8692KiB/s (8901kB/s)(9092KiB/1046msec) 00:10:00.401 slat (nsec): min=931, max=13905k, avg=210634.59, stdev=1084469.11 00:10:00.401 clat (usec): min=11245, max=77081, avg=27772.39, stdev=11839.70 00:10:00.401 lat (usec): min=12865, max=77086, avg=27983.03, stdev=11850.67 00:10:00.401 clat percentiles (usec): 00:10:00.401 | 1.00th=[13960], 5.00th=[16581], 10.00th=[17957], 20.00th=[18744], 00:10:00.401 | 30.00th=[20055], 40.00th=[22414], 50.00th=[23725], 60.00th=[25822], 00:10:00.401 | 70.00th=[29754], 80.00th=[35390], 90.00th=[44303], 95.00th=[51643], 00:10:00.401 | 99.00th=[73925], 99.50th=[73925], 99.90th=[77071], 99.95th=[77071], 00:10:00.401 | 99.99th=[77071] 00:10:00.401 write: IOPS=2447, BW=9790KiB/s (10.0MB/s)(10.0MiB/1046msec); 0 zone resets 00:10:00.401 slat (nsec): min=1621, max=28718k, avg=198473.83, stdev=1380456.45 00:10:00.401 clat (usec): min=10355, max=87479, avg=25305.53, stdev=16144.78 00:10:00.401 lat (usec): min=13071, max=87489, avg=25504.01, stdev=16238.82 00:10:00.401 clat percentiles (usec): 00:10:00.401 | 1.00th=[12911], 5.00th=[14091], 10.00th=[14615], 20.00th=[14877], 00:10:00.401 | 30.00th=[16188], 40.00th=[17171], 50.00th=[19006], 60.00th=[19792], 00:10:00.401 | 70.00th=[22676], 80.00th=[32637], 90.00th=[52167], 95.00th=[62653], 00:10:00.401 | 99.00th=[87557], 99.50th=[87557], 99.90th=[87557], 99.95th=[87557], 00:10:00.401 | 99.99th=[87557] 00:10:00.401 bw ( KiB/s): min= 8192, max=12288, per=14.55%, avg=10240.00, stdev=2896.31, samples=2 00:10:00.401 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:10:00.401 lat (msec) : 20=46.78%, 50=44.71%, 100=8.50% 00:10:00.401 cpu : usr=2.68%, sys=2.30%, ctx=223, majf=0, minf=2 00:10:00.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:10:00.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:00.401 issued rwts: total=2273,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:00.401 00:10:00.401 Run status group 0 (all jobs): 00:10:00.401 READ: bw=63.5MiB/s (66.6MB/s), 8692KiB/s-23.9MiB/s (8901kB/s-25.1MB/s), io=66.5MiB (69.7MB), run=1002-1046msec 00:10:00.401 WRITE: bw=68.7MiB/s (72.0MB/s), 9790KiB/s-24.4MiB/s 
(10.0MB/s-25.6MB/s), io=71.9MiB (75.4MB), run=1002-1046msec 00:10:00.401 00:10:00.401 Disk stats (read/write): 00:10:00.401 nvme0n1: ios=3634/4143, merge=0/0, ticks=35217/38215, in_queue=73432, util=87.58% 00:10:00.401 nvme0n2: ios=4698/5120, merge=0/0, ticks=36505/36608, in_queue=73113, util=91.63% 00:10:00.401 nvme0n3: ios=3072/3568, merge=0/0, ticks=20404/28610, in_queue=49014, util=87.96% 00:10:00.401 nvme0n4: ios=1849/2048, merge=0/0, ticks=11839/13771, in_queue=25610, util=95.94% 00:10:00.401 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:00.401 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1506513 00:10:00.401 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:00.401 12:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:00.401 [global] 00:10:00.401 thread=1 00:10:00.401 invalidate=1 00:10:00.401 rw=read 00:10:00.401 time_based=1 00:10:00.401 runtime=10 00:10:00.401 ioengine=libaio 00:10:00.401 direct=1 00:10:00.401 bs=4096 00:10:00.401 iodepth=1 00:10:00.401 norandommap=1 00:10:00.401 numjobs=1 00:10:00.401 00:10:00.401 [job0] 00:10:00.401 filename=/dev/nvme0n1 00:10:00.401 [job1] 00:10:00.401 filename=/dev/nvme0n2 00:10:00.401 [job2] 00:10:00.401 filename=/dev/nvme0n3 00:10:00.401 [job3] 00:10:00.401 filename=/dev/nvme0n4 00:10:00.401 Could not set queue depth (nvme0n1) 00:10:00.401 Could not set queue depth (nvme0n2) 00:10:00.401 Could not set queue depth (nvme0n3) 00:10:00.401 Could not set queue depth (nvme0n4) 00:10:00.665 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.665 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.665 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.665 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.665 fio-3.35 00:10:00.665 Starting 4 threads 00:10:03.205 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:03.466 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:03.466 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=253952, buflen=4096 00:10:03.466 fio: pid=1506744, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:03.466 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=282624, buflen=4096 00:10:03.466 fio: pid=1506736, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:03.466 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:03.466 12:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:03.725 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:03.725 12:14:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:03.725 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=299008, buflen=4096 00:10:03.725 fio: pid=1506715, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:03.989 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:03.989 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:03.989 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=311296, buflen=4096 00:10:03.989 fio: pid=1506724, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:03.989 00:10:03.989 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1506715: Mon Nov 4 12:14:38 2024 00:10:03.989 read: IOPS=24, BW=98.3KiB/s (101kB/s)(292KiB/2970msec) 00:10:03.989 slat (usec): min=24, max=237, avg=30.80, stdev=32.77 00:10:03.989 clat (usec): min=742, max=43001, avg=40338.21, stdev=8222.68 00:10:03.989 lat (usec): min=778, max=43026, avg=40369.07, stdev=8222.94 00:10:03.989 clat percentiles (usec): 00:10:03.989 | 1.00th=[ 742], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:10:03.989 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:03.989 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:10:03.989 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:03.989 | 99.99th=[43254] 00:10:03.989 bw ( KiB/s): min= 96, max= 112, per=27.70%, avg=99.20, stdev= 7.16, samples=5 00:10:03.989 iops : min= 24, max= 28, avg=24.80, stdev= 1.79, samples=5 00:10:03.989 lat (usec) : 750=1.35% 00:10:03.989 lat (msec) : 2=2.70%, 50=94.59% 00:10:03.989 cpu : usr=0.00%, sys=0.10%, ctx=76, majf=0, minf=2 00:10:03.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.989 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.989 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.989 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1506724: Mon Nov 4 12:14:38 2024 00:10:03.989 read: IOPS=24, BW=97.0KiB/s (99.3kB/s)(304KiB/3134msec) 00:10:03.989 slat (usec): min=25, max=262, avg=34.23, stdev=40.37 00:10:03.989 clat (usec): min=866, max=42207, avg=40910.56, stdev=4679.49 00:10:03.989 lat (usec): min=910, max=42233, avg=40944.89, stdev=4679.02 00:10:03.989 clat percentiles (usec): 00:10:03.989 | 1.00th=[ 865], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:03.989 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:10:03.989 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:03.989 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:03.989 | 99.99th=[42206] 00:10:03.989 bw ( KiB/s): min= 96, max= 104, per=27.14%, avg=97.50, stdev= 3.21, samples=6 00:10:03.989 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:10:03.989 lat (usec) : 1000=1.30% 00:10:03.989 lat (msec) : 50=97.40% 
00:10:03.989 cpu : usr=0.00%, sys=0.10%, ctx=80, majf=0, minf=1 00:10:03.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.989 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.989 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.989 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1506736: Mon Nov 4 12:14:38 2024 00:10:03.989 read: IOPS=25, BW=100.0KiB/s (102kB/s)(276KiB/2761msec) 00:10:03.989 slat (usec): min=25, max=252, avg=30.01, stdev=27.03 00:10:03.989 clat (usec): min=626, max=42857, avg=39664.77, stdev=8381.20 00:10:03.989 lat (usec): min=661, max=42883, avg=39694.83, stdev=8381.15 00:10:03.989 clat percentiles (usec): 00:10:03.989 | 1.00th=[ 627], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:03.989 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:10:03.989 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:03.989 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:03.989 | 99.99th=[42730] 00:10:03.989 bw ( KiB/s): min= 96, max= 104, per=27.98%, avg=100.80, stdev= 4.38, samples=5 00:10:03.989 iops : min= 24, max= 26, avg=25.20, stdev= 1.10, samples=5 00:10:03.989 lat (usec) : 750=2.86%, 1000=1.43% 00:10:03.989 lat (msec) : 50=94.29% 00:10:03.989 cpu : usr=0.00%, sys=0.11%, ctx=71, majf=0, minf=2 00:10:03.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.989 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.989 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.989 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1506744: Mon Nov 4 12:14:38 2024 00:10:03.989 read: IOPS=24, BW=95.5KiB/s (97.8kB/s)(248KiB/2596msec) 00:10:03.989 slat (nsec): min=25625, max=41052, avg=26462.29, stdev=1880.51 00:10:03.989 clat (usec): min=40850, max=42979, avg=41489.12, stdev=542.39 00:10:03.990 lat (usec): min=40877, max=43005, avg=41515.58, stdev=542.30 00:10:03.990 clat percentiles (usec): 00:10:03.990 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:03.990 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:10:03.990 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:03.990 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:03.990 | 99.99th=[42730] 00:10:03.990 bw ( KiB/s): min= 96, max= 96, per=26.86%, avg=96.00, stdev= 0.00, samples=5 00:10:03.990 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:10:03.990 lat (msec) : 50=98.41% 00:10:03.990 cpu : usr=0.00%, sys=0.12%, ctx=63, majf=0, minf=1 00:10:03.990 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.990 complete : 0=1.6%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.990 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.990 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.990 00:10:03.990 Run status 
group 0 (all jobs): 00:10:03.990 READ: bw=357KiB/s (366kB/s), 95.5KiB/s-100.0KiB/s (97.8kB/s-102kB/s), io=1120KiB (1147kB), run=2596-3134msec 00:10:03.990 00:10:03.990 Disk stats (read/write): 00:10:03.990 nvme0n1: ios=70/0, merge=0/0, ticks=2821/0, in_queue=2821, util=94.46% 00:10:03.990 nvme0n2: ios=75/0, merge=0/0, ticks=3070/0, in_queue=3070, util=95.62% 00:10:03.990 nvme0n3: ios=65/0, merge=0/0, ticks=2574/0, in_queue=2574, util=96.02% 00:10:03.990 nvme0n4: ios=62/0, merge=0/0, ticks=2574/0, in_queue=2574, util=96.41% 00:10:03.990 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:03.990 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:04.252 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:04.252 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:04.544 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:04.544 12:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:04.544 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:04.544 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:04.806 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:04.806 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1506513 00:10:04.806 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:04.806 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:04.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.806 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:04.806 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:04.806 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:04.806 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.806 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:04.806 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.806 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:04.806 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:04.806 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:04.806 
nvmf hotplug test: fio failed as expected 00:10:04.806 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.067 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:05.067 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:05.068 rmmod nvme_tcp 00:10:05.068 rmmod nvme_fabrics 00:10:05.068 rmmod nvme_keyring 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1502986 ']' 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1502986 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1502986 ']' 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1502986 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:05.068 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1502986 00:10:05.329 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:05.329 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:05.329 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1502986' 00:10:05.329 killing process with pid 1502986 00:10:05.329 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1502986 00:10:05.329 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1502986 00:10:05.329 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:05.329 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:05.329 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:05.329 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:05.329 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:10:05.329 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:05.329 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:10:05.329 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:05.329 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:05.329 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.329 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.329 12:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.873 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:07.873 00:10:07.873 real 0m28.543s 00:10:07.873 user 2m35.796s 00:10:07.873 sys 0m8.764s 00:10:07.873 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.873 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.873 ************************************ 00:10:07.873 END TEST nvmf_fio_target 00:10:07.873 ************************************ 00:10:07.873 12:14:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:07.873 12:14:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:07.873 12:14:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.873 12:14:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.873 ************************************ 00:10:07.873 START TEST nvmf_bdevio 00:10:07.873 ************************************ 00:10:07.873 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:07.873 * Looking for test storage... 
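
For reference, the hotplug test that just finished (nvmf_fio_target above) drives fio through scripts/fio-wrapper and then deletes the backing bdevs while the reads are still in flight, so the "Operation not supported" io_u errors and the closing "fio failed as expected" message are the intended outcome. A minimal standalone sketch of the same sequence, assembled from the parameters printed in the trace (the job-file path and the short rpc.py name are illustrative; the trace uses the full workspace paths):

# Job file equivalent to what fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 generates
cat > /tmp/nvmf-hotplug.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf-hotplug.fio &
# Hot-remove the bdevs backing the namespaces while fio is reading;
# in-flight I/O then completes with "Operation not supported".
rpc.py bdev_raid_delete concat0
rpc.py bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    rpc.py bdev_malloc_delete "$m"
done
wait   # fio exits non-zero, which fio.sh@70 records as fio_status=4
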
00:10:07.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:07.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.873 --rc genhtml_branch_coverage=1 00:10:07.873 --rc genhtml_function_coverage=1 00:10:07.873 --rc genhtml_legend=1 00:10:07.873 --rc geninfo_all_blocks=1 00:10:07.873 --rc geninfo_unexecuted_blocks=1 00:10:07.873 00:10:07.873 ' 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:07.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.873 --rc genhtml_branch_coverage=1 00:10:07.873 --rc genhtml_function_coverage=1 00:10:07.873 --rc genhtml_legend=1 00:10:07.873 --rc geninfo_all_blocks=1 00:10:07.873 --rc geninfo_unexecuted_blocks=1 00:10:07.873 00:10:07.873 ' 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:07.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.873 --rc genhtml_branch_coverage=1 00:10:07.873 --rc genhtml_function_coverage=1 00:10:07.873 --rc genhtml_legend=1 00:10:07.873 --rc geninfo_all_blocks=1 00:10:07.873 --rc geninfo_unexecuted_blocks=1 00:10:07.873 00:10:07.873 ' 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:07.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.873 --rc genhtml_branch_coverage=1 00:10:07.873 --rc genhtml_function_coverage=1 00:10:07.873 --rc genhtml_legend=1 00:10:07.873 --rc geninfo_all_blocks=1 00:10:07.873 --rc geninfo_unexecuted_blocks=1 00:10:07.873 00:10:07.873 ' 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:07.873 12:14:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:16.015 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:16.015 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:16.015 12:14:49 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:16.015 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:16.015 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.015 
12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.015 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:16.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:10:16.016 00:10:16.016 --- 10.0.0.2 ping statistics --- 00:10:16.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.016 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:16.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:10:16.016 00:10:16.016 --- 10.0.0.1 ping statistics --- 00:10:16.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.016 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1512057 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1512057 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1512057 ']' 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.016 12:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.016 [2024-11-04 12:14:49.561173] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
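
The two connectivity checks just above close out nvmftestinit: the test splits the e810 port pair across a network namespace so that the target side (10.0.0.2 on cvl_0_0, inside cvl_0_0_ns_spdk) and the initiator side (10.0.0.1 on cvl_0_1, in the default namespace) talk over the physical link. A condensed sketch of the setup traced in nvmf/common.sh, using the device names and addresses from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port, tagged so teardown can strip the rule again
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                      # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator
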
00:10:16.016 [2024-11-04 12:14:49.561254] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.016 [2024-11-04 12:14:49.650618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.016 [2024-11-04 12:14:49.691969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.016 [2024-11-04 12:14:49.692011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.016 [2024-11-04 12:14:49.692019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.016 [2024-11-04 12:14:49.692026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.016 [2024-11-04 12:14:49.692032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.016 [2024-11-04 12:14:49.694022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:16.016 [2024-11-04 12:14:49.694177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:16.016 [2024-11-04 12:14:49.694337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.016 [2024-11-04 12:14:49.694338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.016 [2024-11-04 12:14:50.420741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.016 Malloc0 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.016 12:14:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:16.016 [2024-11-04 12:14:50.496708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:16.016 { 00:10:16.016 "params": { 00:10:16.016 "name": "Nvme$subsystem", 00:10:16.016 "trtype": "$TEST_TRANSPORT", 00:10:16.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:16.016 "adrfam": "ipv4", 00:10:16.016 "trsvcid": "$NVMF_PORT", 00:10:16.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:16.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:16.016 "hdgst": ${hdgst:-false}, 00:10:16.016 "ddgst": ${ddgst:-false} 00:10:16.016 }, 00:10:16.016 "method": "bdev_nvme_attach_controller" 00:10:16.016 } 00:10:16.016 EOF 00:10:16.016 )") 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:10:16.016 12:14:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:16.016 "params": { 00:10:16.016 "name": "Nvme1", 00:10:16.016 "trtype": "tcp", 00:10:16.016 "traddr": "10.0.0.2", 00:10:16.016 "adrfam": "ipv4", 00:10:16.016 "trsvcid": "4420", 00:10:16.016 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:16.016 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:16.016 "hdgst": false, 00:10:16.016 "ddgst": false 00:10:16.016 }, 00:10:16.016 "method": "bdev_nvme_attach_controller" 00:10:16.016 }' 00:10:16.016 [2024-11-04 12:14:50.564843] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
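
Before bdevio starts its suite, the target side above is assembled with a short RPC sequence and the initiator attach parameters are rendered by gen_nvmf_target_json. A sketch of the equivalent commands (rpc.py stands in for the full /var/jenkins/workspace/.../spdk/scripts/rpc.py path used in the trace):

rpc.py nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8192-byte in-capsule data
rpc.py bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevio then reads the rendered bdev_nvme_attach_controller config
# (Nvme1 -> 10.0.0.2:4420, hdgst/ddgst off) from /dev/fd/62, as printed above.
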
00:10:16.016 [2024-11-04 12:14:50.564921] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512117 ] 00:10:16.277 [2024-11-04 12:14:50.631672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.277 [2024-11-04 12:14:50.679073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.277 [2024-11-04 12:14:50.679181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.277 [2024-11-04 12:14:50.679177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.538 I/O targets: 00:10:16.538 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:16.538 00:10:16.538 00:10:16.538 CUnit - A unit testing framework for C - Version 2.1-3 00:10:16.538 http://cunit.sourceforge.net/ 00:10:16.538 00:10:16.538 00:10:16.538 Suite: bdevio tests on: Nvme1n1 00:10:16.538 Test: blockdev write read block ...passed 00:10:16.538 Test: blockdev write zeroes read block ...passed 00:10:16.538 Test: blockdev write zeroes read no split ...passed 00:10:16.538 Test: blockdev write zeroes read split ...passed 00:10:16.538 Test: blockdev write zeroes read split partial ...passed 00:10:16.538 Test: blockdev reset ...[2024-11-04 12:14:50.998419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:16.538 [2024-11-04 12:14:50.998483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177e0d0 (9): Bad file descriptor 00:10:16.538 [2024-11-04 12:14:51.106780] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:16.538 passed 00:10:16.799 Test: blockdev write read 8 blocks ...passed 00:10:16.799 Test: blockdev write read size > 128k ...passed 00:10:16.799 Test: blockdev write read invalid size ...passed 00:10:16.799 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:16.799 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:16.799 Test: blockdev write read max offset ...passed 00:10:16.799 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:16.799 Test: blockdev writev readv 8 blocks ...passed 00:10:16.799 Test: blockdev writev readv 30 x 1block ...passed 00:10:16.799 Test: blockdev writev readv block ...passed 00:10:16.799 Test: blockdev writev readv size > 128k ...passed 00:10:16.799 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:16.799 Test: blockdev comparev and writev ...[2024-11-04 12:14:51.331282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.799 [2024-11-04 12:14:51.331308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:16.799 [2024-11-04 12:14:51.331319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.800 [2024-11-04 12:14:51.331326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:16.800 [2024-11-04 12:14:51.331755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.800 [2024-11-04 12:14:51.331764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:16.800 [2024-11-04 12:14:51.331775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.800 [2024-11-04 12:14:51.331781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:16.800 [2024-11-04 12:14:51.332234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.800 [2024-11-04 12:14:51.332243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:16.800 [2024-11-04 12:14:51.332257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.800 [2024-11-04 12:14:51.332262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:16.800 [2024-11-04 12:14:51.332753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.800 [2024-11-04 12:14:51.332762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:16.800 [2024-11-04 12:14:51.332772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:16.800 [2024-11-04 12:14:51.332777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:17.061 passed 00:10:17.061 Test: blockdev nvme passthru rw ...passed 00:10:17.061 Test: blockdev nvme passthru vendor specific ...[2024-11-04 12:14:51.416519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.061 [2024-11-04 12:14:51.416532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:17.061 [2024-11-04 12:14:51.416856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.061 [2024-11-04 12:14:51.416864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:17.062 [2024-11-04 12:14:51.417196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.062 [2024-11-04 12:14:51.417204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:17.062 [2024-11-04 12:14:51.417529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.062 [2024-11-04 12:14:51.417539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:17.062 passed 00:10:17.062 Test: blockdev nvme admin passthru ...passed 00:10:17.062 Test: blockdev copy ...passed 00:10:17.062 00:10:17.062 Run Summary: Type Total Ran Passed Failed Inactive 00:10:17.062 suites 1 1 n/a 0 0 00:10:17.062 tests 23 23 23 0 0 00:10:17.062 asserts 152 152 152 0 n/a 00:10:17.062 00:10:17.062 Elapsed time = 1.212 seconds 00:10:17.062 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.062 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.062 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:17.062 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.062 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:17.062 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:17.062 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:17.062 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:17.062 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.062 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:17.062 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.062 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.062 rmmod nvme_tcp 00:10:17.062 rmmod nvme_fabrics 00:10:17.062 rmmod nvme_keyring 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
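
For reference while the teardown above unwinds: the Nvme1n1 bdev that just passed all 23 bdevio tests sat on a target assembled from a handful of RPCs traced in this log. A condensed sketch follows (rpc_cmd is the autotest wrapper around scripts/rpc.py; the -s serial value is taken from the nvmf_example run later in this log and is an assumption for the bdevio run, whose create step is outside this excerpt):

# Hedged sketch: stand up the one-namespace NVMe/TCP target bdevio tested.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # TCP transport; -u sets in-capsule data size
rpc_cmd bdev_malloc_create 64 512                    # 64 MiB RAM bdev, 512-byte blocks -> Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # becomes NSID 1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
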
00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1512057 ']' 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1512057 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1512057 ']' 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1512057 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1512057 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1512057' 00:10:17.323 killing process with pid 1512057 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1512057 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1512057 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:17.323 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:17.324 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:17.324 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.324 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.324 12:14:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.874 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:19.874 00:10:19.874 real 0m11.958s 00:10:19.874 user 0m12.860s 00:10:19.874 sys 0m6.057s 00:10:19.874 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.874 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:19.874 ************************************ 00:10:19.874 END TEST nvmf_bdevio 00:10:19.874 ************************************ 00:10:19.874 12:14:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:19.874 00:10:19.874 real 4m59.408s 00:10:19.874 user 11m46.299s 00:10:19.874 sys 1m47.649s 
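
The nvmftestfini/killprocess sequence traced above is how each of these target tests ends. A simplified reconstruction, pieced together from the trace rather than from the common.sh source (function bodies and the exact iptr pipeline are assumptions):

# Hedged sketch of the per-test teardown seen above.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 0                       # nothing to do if already gone
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 0   # assumed guard around sudo wrappers
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}
nvmftestfini() {
    sync
    modprobe -v -r nvme-tcp nvme-fabrics             # produces the rmmod nvme_* lines above
    [ -n "$nvmfpid" ] && killprocess "$nvmfpid"      # pid 1512057 (reactor_3) in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only SPDK-tagged rules
    _remove_spdk_ns                                  # delete the cvl_0_0_ns_spdk namespace
    ip -4 addr flush cvl_0_1                         # final address flush, as traced above
}
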
00:10:19.874 12:14:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.874 12:14:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:19.874 ************************************ 00:10:19.874 END TEST nvmf_target_core 00:10:19.874 ************************************ 00:10:19.874 12:14:54 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:19.874 12:14:54 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:19.874 12:14:54 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.874 12:14:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:19.874 ************************************ 00:10:19.874 START TEST nvmf_target_extra 00:10:19.874 ************************************ 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:19.874 * Looking for test storage... 00:10:19.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:19.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.874 --rc genhtml_branch_coverage=1 00:10:19.874 --rc genhtml_function_coverage=1 00:10:19.874 --rc genhtml_legend=1 00:10:19.874 --rc geninfo_all_blocks=1 00:10:19.874 --rc geninfo_unexecuted_blocks=1 00:10:19.874 00:10:19.874 ' 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:19.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.874 --rc genhtml_branch_coverage=1 00:10:19.874 --rc genhtml_function_coverage=1 00:10:19.874 --rc genhtml_legend=1 00:10:19.874 --rc geninfo_all_blocks=1 00:10:19.874 --rc geninfo_unexecuted_blocks=1 00:10:19.874 00:10:19.874 ' 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:19.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.874 --rc genhtml_branch_coverage=1 00:10:19.874 --rc genhtml_function_coverage=1 00:10:19.874 --rc genhtml_legend=1 00:10:19.874 --rc geninfo_all_blocks=1 00:10:19.874 --rc geninfo_unexecuted_blocks=1 00:10:19.874 00:10:19.874 ' 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:19.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.874 --rc genhtml_branch_coverage=1 00:10:19.874 --rc genhtml_function_coverage=1 00:10:19.874 --rc genhtml_legend=1 00:10:19.874 --rc geninfo_all_blocks=1 00:10:19.874 --rc geninfo_unexecuted_blocks=1 00:10:19.874 00:10:19.874 ' 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.874 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:19.875 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.875 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.875 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.875 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.875 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.875 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.875 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.875 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.875 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.875 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:19.875 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:19.875 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:19.875 12:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:19.875 12:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:19.875 12:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.875 12:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:19.875 ************************************ 00:10:19.875 START TEST nvmf_example 00:10:19.875 ************************************ 00:10:19.875 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:20.137 * Looking for test storage... 
00:10:20.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:20.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.137 --rc genhtml_branch_coverage=1 00:10:20.137 --rc genhtml_function_coverage=1 00:10:20.137 --rc genhtml_legend=1 00:10:20.137 --rc geninfo_all_blocks=1 00:10:20.137 --rc geninfo_unexecuted_blocks=1 00:10:20.137 00:10:20.137 ' 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:20.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.137 --rc genhtml_branch_coverage=1 00:10:20.137 --rc genhtml_function_coverage=1 00:10:20.137 --rc genhtml_legend=1 00:10:20.137 --rc geninfo_all_blocks=1 00:10:20.137 --rc geninfo_unexecuted_blocks=1 00:10:20.137 00:10:20.137 ' 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:20.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.137 --rc genhtml_branch_coverage=1 00:10:20.137 --rc genhtml_function_coverage=1 00:10:20.137 --rc genhtml_legend=1 00:10:20.137 --rc geninfo_all_blocks=1 00:10:20.137 --rc geninfo_unexecuted_blocks=1 00:10:20.137 00:10:20.137 ' 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:20.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.137 --rc genhtml_branch_coverage=1 00:10:20.137 --rc genhtml_function_coverage=1 00:10:20.137 --rc genhtml_legend=1 00:10:20.137 --rc geninfo_all_blocks=1 00:10:20.137 --rc geninfo_unexecuted_blocks=1 00:10:20.137 00:10:20.137 ' 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:20.137 12:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.137 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:20.138 12:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:20.138 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:28.283 12:15:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:28.283 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:28.284 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:28.284 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:28.284 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:28.284 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.284 12:15:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:28.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:28.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:10:28.284 00:10:28.284 --- 10.0.0.2 ping statistics --- 00:10:28.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.284 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:28.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:28.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:10:28.284 00:10:28.284 --- 10.0.0.1 ping statistics --- 00:10:28.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.284 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1516916 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1516916 00:10:28.284 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1516916 ']' 00:10:28.285 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.285 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:28.285 12:15:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.285 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:28.285 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.285 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:28.285 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:28.285 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:28.285 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:28.285 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:10:28.545 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:38.668 Initializing NVMe Controllers
00:10:38.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:38.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:38.668 Initialization complete. Launching workers.
00:10:38.668 ========================================================
00:10:38.668 Latency(us)
00:10:38.668 Device Information : IOPS MiB/s Average min max
00:10:38.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18653.17 72.86 3430.65 616.69 15293.92
00:10:38.668 ========================================================
00:10:38.668 Total : 18653.17 72.86 3430.65 616.69 15293.92
00:10:38.668
00:10:38.668 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:10:38.668 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:10:38.668 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup
00:10:38.668 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:10:38.668 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:38.668 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:10:38.668 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:38.668 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:38.668 rmmod nvme_tcp
00:10:38.668 rmmod nvme_fabrics
00:10:38.669 rmmod nvme_keyring
00:10:38.669 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:38.669 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:10:38.669 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:10:38.669 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 1516916 ']'
00:10:38.669 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 1516916
00:10:38.669 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1516916 ']'
00:10:38.669 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1516916
00:10:38.669 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:10:38.669 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:38.669 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1516916
00:10:38.929 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf
00:10:38.929 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']'
00:10:38.929 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1516916'
00:10:38.929 killing process with pid 1516916
00:10:38.929 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1516916
00:10:38.929 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1516916
00:10:38.929 nvmf threads initialize successfully
00:10:38.929 bdev subsystem init successfully
00:10:38.929 created a nvmf target service
00:10:38.929 create targets's poll groups done
00:10:38.929 all subsystems of target started
00:10:38.929 nvmf target is running
00:10:38.929 all subsystems of target stopped
00:10:38.929 destroy targets's poll groups done
00:10:38.929 destroyed the nvmf target service
00:10:38.929 bdev subsystem finish successfully
00:10:38.929 nvmf threads destroy successfully
00:10:38.929 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:10:38.929 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:10:38.929 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:10:38.929 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:10:38.929 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save
00:10:38.929 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:10:38.929 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore
00:10:38.929 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:38.929 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:38.929 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:38.929 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:38.929 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:41.477
00:10:41.477 real 0m21.149s
00:10:41.477 user 0m46.243s
00:10:41.477 sys 0m6.836s
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:41.477 ************************************
00:10:41.477 END TEST nvmf_example
00:10:41.477 ************************************
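The latency table above comes from a single ten-second spdk_nvme_perf run. Unpacking the flags (these glosses are mine, not from the log): -q 64 keeps 64 I/Os outstanding, -o 4096 uses 4 KiB I/Os, -w randrw -M 30 runs a random mix with 30% reads, -t 10 runs for ten seconds, and -r names the transport ID of the listener created earlier. As a standalone command:

    # Re-run the measurement by hand from the SPDK tree (a sketch; the target
    # set up above must still be listening on 10.0.0.2:4420).
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The reported numbers are self-consistent: 18653.17 IOPS at 4 KiB is the 72.86 MiB/s shown, and a queue depth of 64 divided by the 3430.65 us average latency likewise gives about 18.7k IOPS.

00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem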
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:41.477 ************************************
00:10:41.477 START TEST nvmf_filesystem
00:10:41.477 ************************************
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:10:41.477 * Looking for test storage...
00:10:41.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:10:41.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:41.477 --rc genhtml_branch_coverage=1
00:10:41.477 --rc genhtml_function_coverage=1
00:10:41.477 --rc genhtml_legend=1
00:10:41.477 --rc geninfo_all_blocks=1
00:10:41.477 --rc geninfo_unexecuted_blocks=1
00:10:41.477
00:10:41.477 '
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:10:41.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:41.477 --rc genhtml_branch_coverage=1
00:10:41.477 --rc genhtml_function_coverage=1
00:10:41.477 --rc genhtml_legend=1
00:10:41.477 --rc geninfo_all_blocks=1
00:10:41.477 --rc geninfo_unexecuted_blocks=1
00:10:41.477
00:10:41.477 '
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:10:41.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:41.477 --rc genhtml_branch_coverage=1
00:10:41.477 --rc genhtml_function_coverage=1
00:10:41.477 --rc genhtml_legend=1
00:10:41.477 --rc geninfo_all_blocks=1
00:10:41.477 --rc geninfo_unexecuted_blocks=1
00:10:41.477
00:10:41.477 '
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:10:41.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:41.477 --rc genhtml_branch_coverage=1
00:10:41.477 --rc genhtml_function_coverage=1
00:10:41.477 --rc genhtml_legend=1
00:10:41.477 --rc geninfo_all_blocks=1
00:10:41.477 --rc geninfo_unexecuted_blocks=1
00:10:41.477
00:10:41.477 '
00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
00:10:41.477 12:15:15
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:41.477 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:41.478 12:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:41.478 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:41.479 #define SPDK_CONFIG_H 00:10:41.479 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:41.479 #define SPDK_CONFIG_APPS 1 00:10:41.479 #define SPDK_CONFIG_ARCH native 00:10:41.479 #undef SPDK_CONFIG_ASAN 00:10:41.479 #undef SPDK_CONFIG_AVAHI 00:10:41.479 #undef SPDK_CONFIG_CET 00:10:41.479 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:41.479 #define SPDK_CONFIG_COVERAGE 1 00:10:41.479 #define SPDK_CONFIG_CROSS_PREFIX 00:10:41.479 #undef SPDK_CONFIG_CRYPTO 00:10:41.479 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:41.479 #undef SPDK_CONFIG_CUSTOMOCF 00:10:41.479 #undef SPDK_CONFIG_DAOS 00:10:41.479 #define SPDK_CONFIG_DAOS_DIR 00:10:41.479 #define SPDK_CONFIG_DEBUG 1 00:10:41.479 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:41.479 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:41.479 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:41.479 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:41.479 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:41.479 #undef SPDK_CONFIG_DPDK_UADK 00:10:41.479 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:41.479 #define SPDK_CONFIG_EXAMPLES 1 00:10:41.479 #undef SPDK_CONFIG_FC 00:10:41.479 #define SPDK_CONFIG_FC_PATH 00:10:41.479 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:41.479 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:41.479 #define SPDK_CONFIG_FSDEV 1 00:10:41.479 #undef SPDK_CONFIG_FUSE 00:10:41.479 #undef SPDK_CONFIG_FUZZER 00:10:41.479 #define SPDK_CONFIG_FUZZER_LIB 00:10:41.479 #undef SPDK_CONFIG_GOLANG 00:10:41.479 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:41.479 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:41.479 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:41.479 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:41.479 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:41.479 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:41.479 #undef SPDK_CONFIG_HAVE_LZ4 00:10:41.479 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:41.479 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:41.479 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:41.479 #define SPDK_CONFIG_IDXD 1 00:10:41.479 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:41.479 #undef SPDK_CONFIG_IPSEC_MB 00:10:41.479 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:41.479 #define SPDK_CONFIG_ISAL 1 00:10:41.479 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:41.479 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:41.479 #define SPDK_CONFIG_LIBDIR 00:10:41.479 #undef SPDK_CONFIG_LTO 00:10:41.479 #define SPDK_CONFIG_MAX_LCORES 128 00:10:41.479 #define SPDK_CONFIG_NVME_CUSE 1 00:10:41.479 #undef SPDK_CONFIG_OCF 00:10:41.479 #define SPDK_CONFIG_OCF_PATH 00:10:41.479 #define SPDK_CONFIG_OPENSSL_PATH 00:10:41.479 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:41.479 #define SPDK_CONFIG_PGO_DIR 00:10:41.479 #undef SPDK_CONFIG_PGO_USE 00:10:41.479 #define SPDK_CONFIG_PREFIX /usr/local 00:10:41.479 #undef SPDK_CONFIG_RAID5F 00:10:41.479 #undef SPDK_CONFIG_RBD 00:10:41.479 #define SPDK_CONFIG_RDMA 1 00:10:41.479 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:41.479 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:41.479 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:41.479 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:41.479 #define SPDK_CONFIG_SHARED 1 00:10:41.479 #undef SPDK_CONFIG_SMA 00:10:41.479 #define SPDK_CONFIG_TESTS 1 00:10:41.479 #undef SPDK_CONFIG_TSAN 00:10:41.479 #define SPDK_CONFIG_UBLK 1 00:10:41.479 #define SPDK_CONFIG_UBSAN 1 00:10:41.479 #undef SPDK_CONFIG_UNIT_TESTS 00:10:41.479 #undef SPDK_CONFIG_URING 00:10:41.479 #define 
SPDK_CONFIG_URING_PATH 00:10:41.479 #undef SPDK_CONFIG_URING_ZNS 00:10:41.479 #undef SPDK_CONFIG_USDT 00:10:41.479 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:41.479 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:41.479 #define SPDK_CONFIG_VFIO_USER 1 00:10:41.479 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:41.479 #define SPDK_CONFIG_VHOST 1 00:10:41.479 #define SPDK_CONFIG_VIRTIO 1 00:10:41.479 #undef SPDK_CONFIG_VTUNE 00:10:41.479 #define SPDK_CONFIG_VTUNE_DIR 00:10:41.479 #define SPDK_CONFIG_WERROR 1 00:10:41.479 #define SPDK_CONFIG_WPDK_DIR 00:10:41.479 #undef SPDK_CONFIG_XNVME 00:10:41.479 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.479 12:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:41.479 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:41.480 
12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:41.480 12:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:41.480 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
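The suppression-file steps traced above keep LeakSanitizer from flagging a known libfuse3 leak. A condensed sketch of the same idea (the `cat` of any pre-existing suppression list is elided here, as it is in the trace):

    # rebuild the LSAN suppression file and point LeakSanitizer at it
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file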
00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:41.481 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1520183 ]] 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1520183 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
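set_test_storage, whose body is traced below, walks `df -T` output and picks a scratch directory with at least the requested free space (2 GiB plus a 64 MiB margin here, i.e. the requested_size=2214592512 seen in the trace). A simplified sketch of that selection loop, not the exact autotest_common.sh code:

    # pick the first mount with enough free space for the test scratch area
    requested_size=2214592512   # 2 GiB + 64 MiB margin, as computed in the trace
    df -T | grep -v Filesystem | while read -r source fs size used avail _ mount; do
        avail_bytes=$((avail * 1024))   # df -T reports 1K blocks
        if (( avail_bytes >= requested_size )); then
            echo "test storage candidate: $mount ($fs, $avail_bytes bytes free)"
            break
        fi
    done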
00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.3znR1x 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.3znR1x/tests/target /tmp/spdk.3znR1x 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:41.482 12:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122581024768 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356541952 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6775517184 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668237824 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678268928 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847947264 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871310848 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23363584 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.482 12:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677720064 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678273024 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=552960 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:41.482 * Looking for test storage... 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122581024768 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8990109696 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:41.482 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:41.483 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:41.483 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:41.483 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:41.483 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:41.483 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:41.483 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:41.483 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:41.483 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:41.483 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:41.483 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:41.483 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.483 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:41.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.744 --rc genhtml_branch_coverage=1 00:10:41.744 --rc genhtml_function_coverage=1 00:10:41.744 --rc genhtml_legend=1 00:10:41.744 --rc geninfo_all_blocks=1 00:10:41.744 --rc geninfo_unexecuted_blocks=1 00:10:41.744 00:10:41.744 ' 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:41.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.744 --rc genhtml_branch_coverage=1 00:10:41.744 --rc genhtml_function_coverage=1 00:10:41.744 --rc genhtml_legend=1 00:10:41.744 --rc geninfo_all_blocks=1 00:10:41.744 --rc geninfo_unexecuted_blocks=1 00:10:41.744 00:10:41.744 ' 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:41.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.744 --rc genhtml_branch_coverage=1 00:10:41.744 --rc genhtml_function_coverage=1 00:10:41.744 --rc genhtml_legend=1 00:10:41.744 --rc geninfo_all_blocks=1 00:10:41.744 --rc geninfo_unexecuted_blocks=1 00:10:41.744 00:10:41.744 ' 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:41.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.744 --rc genhtml_branch_coverage=1 00:10:41.744 --rc genhtml_function_coverage=1 00:10:41.744 --rc genhtml_legend=1 00:10:41.744 --rc geninfo_all_blocks=1 00:10:41.744 --rc geninfo_unexecuted_blocks=1 00:10:41.744 00:10:41.744 ' 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.744 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.745 12:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:41.745 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:49.887 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:49.887 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.887 12:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.887 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:49.888 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:49.888 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:49.888 12:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:49.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:10:49.888 00:10:49.888 --- 10.0.0.2 ping statistics --- 00:10:49.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.888 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:49.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:10:49.888 00:10:49.888 --- 10.0.0.1 ping statistics --- 00:10:49.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.888 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:49.888 ************************************ 00:10:49.888 START TEST nvmf_filesystem_no_in_capsule 00:10:49.888 ************************************ 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1523977 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1523977 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1523977 ']' 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:49.888 
12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:49.888 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.888 [2024-11-04 12:15:23.641558] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:10:49.888 [2024-11-04 12:15:23.641625] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.888 [2024-11-04 12:15:23.713593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.888 [2024-11-04 12:15:23.756912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.888 [2024-11-04 12:15:23.756951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.888 [2024-11-04 12:15:23.756959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.888 [2024-11-04 12:15:23.756967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.888 [2024-11-04 12:15:23.756973] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
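Once the reactors below come up, the test drives the target entirely over its Unix-socket RPC channel. Condensed, the setup calls traced further down are equivalent to the following rpc.py invocations (assuming rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock inside the cvl_0_0_ns_spdk namespace, which is how the trace uses it; all flags are taken from the trace itself):

    # target-side setup, as issued over RPC in the trace below
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420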
00:10:49.888 [2024-11-04 12:15:23.758808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.888 [2024-11-04 12:15:23.759074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.888 [2024-11-04 12:15:23.759235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.888 [2024-11-04 12:15:23.759236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.888 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.888 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:49.889 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:49.889 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:49.889 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.149 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.149 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:50.149 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:50.149 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.149 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.149 [2024-11-04 12:15:24.490365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.149 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.149 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:50.149 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.150 Malloc1 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.150 12:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.150 [2024-11-04 12:15:24.621999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:50.150 { 00:10:50.150 "name": "Malloc1", 00:10:50.150 "aliases": [ 00:10:50.150 "8bb7719b-841e-479f-b7b2-9476a5214d6d" 00:10:50.150 ], 00:10:50.150 "product_name": "Malloc disk", 00:10:50.150 "block_size": 512, 00:10:50.150 "num_blocks": 1048576, 00:10:50.150 "uuid": "8bb7719b-841e-479f-b7b2-9476a5214d6d", 00:10:50.150 "assigned_rate_limits": { 00:10:50.150 "rw_ios_per_sec": 0, 00:10:50.150 "rw_mbytes_per_sec": 0, 00:10:50.150 "r_mbytes_per_sec": 0, 00:10:50.150 "w_mbytes_per_sec": 0 00:10:50.150 }, 00:10:50.150 "claimed": true, 00:10:50.150 "claim_type": "exclusive_write", 00:10:50.150 "zoned": false, 00:10:50.150 "supported_io_types": { 00:10:50.150 "read": 
true, 00:10:50.150 "write": true, 00:10:50.150 "unmap": true, 00:10:50.150 "flush": true, 00:10:50.150 "reset": true, 00:10:50.150 "nvme_admin": false, 00:10:50.150 "nvme_io": false, 00:10:50.150 "nvme_io_md": false, 00:10:50.150 "write_zeroes": true, 00:10:50.150 "zcopy": true, 00:10:50.150 "get_zone_info": false, 00:10:50.150 "zone_management": false, 00:10:50.150 "zone_append": false, 00:10:50.150 "compare": false, 00:10:50.150 "compare_and_write": false, 00:10:50.150 "abort": true, 00:10:50.150 "seek_hole": false, 00:10:50.150 "seek_data": false, 00:10:50.150 "copy": true, 00:10:50.150 "nvme_iov_md": false 00:10:50.150 }, 00:10:50.150 "memory_domains": [ 00:10:50.150 { 00:10:50.150 "dma_device_id": "system", 00:10:50.150 "dma_device_type": 1 00:10:50.150 }, 00:10:50.150 { 00:10:50.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.150 "dma_device_type": 2 00:10:50.150 } 00:10:50.150 ], 00:10:50.150 "driver_specific": {} 00:10:50.150 } 00:10:50.150 ]' 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:50.150 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:50.410 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:50.410 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:50.410 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:50.410 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:50.410 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:51.796 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:51.796 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:51.796 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:51.796 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:51.796 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:53.710 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:53.710 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:53.710 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:53.710 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:53.710 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:53.710 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:53.710 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:53.710 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:53.710 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:53.710 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:53.710 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:53.710 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:53.710 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:53.710 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:53.710 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:53.710 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:53.971 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:54.232 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:54.232 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:55.174 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:55.174 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:55.174 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:55.174 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.174 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.174 ************************************ 00:10:55.174 START TEST filesystem_ext4 00:10:55.174 ************************************ 00:10:55.174 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
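For reference, the host-side bring-up traced above reduces to a handful of nvme-cli and parted calls. A minimal sketch, assuming the NQN, address, and serial from this log (the traced connect also passes --hostnqn/--hostid, omitted here):

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# resolve the kernel name of the new namespace by its subsystem serial
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
# one GPT partition spanning the namespace, exactly as the test does
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe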
00:10:55.174 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:55.175 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:55.175 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:55.175 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:55.175 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:55.175 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:55.175 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:55.175 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:55.175 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:55.175 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:55.175 mke2fs 1.47.0 (5-Feb-2023) 00:10:55.435 Discarding device blocks: 0/522240 done 00:10:55.435 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:55.435 Filesystem UUID: d544a414-b731-4b32-8975-9cbe2f5089db 00:10:55.435 Superblock backups stored on blocks: 00:10:55.435 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:55.435 00:10:55.435 Allocating group tables: 0/64 done 00:10:55.435 Writing inode tables: 0/64 done 00:10:55.696 Creating journal (8192 blocks): done 00:10:57.577 Writing superblocks and filesystem accounting information: 0/64 done 00:10:57.577 00:10:57.577 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:57.577 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:02.862 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:02.862 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:02.862 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:02.862 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:02.862 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:02.862 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:03.123 
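Each nvmf_filesystem_create pass is the same smoke test: format, mount, create and delete a file with syncs in between, unmount. Condensed from the ext4 trace above (device and mount point as logged):

mkfs.ext4 -F /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync
umount /mnt/device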
12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1523977 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:03.123 00:11:03.123 real 0m7.754s 00:11:03.123 user 0m0.033s 00:11:03.123 sys 0m0.074s 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:03.123 ************************************ 00:11:03.123 END TEST filesystem_ext4 00:11:03.123 ************************************ 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.123 ************************************ 00:11:03.123 START TEST filesystem_btrfs 00:11:03.123 ************************************ 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:03.123 12:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:03.123 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:03.383 btrfs-progs v6.8.1 00:11:03.383 See https://btrfs.readthedocs.io for more information. 00:11:03.383 00:11:03.383 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:03.383 NOTE: several default settings have changed in version 5.15, please make sure 00:11:03.383 this does not affect your deployments: 00:11:03.383 - DUP for metadata (-m dup) 00:11:03.383 - enabled no-holes (-O no-holes) 00:11:03.383 - enabled free-space-tree (-R free-space-tree) 00:11:03.383 00:11:03.383 Label: (null) 00:11:03.383 UUID: c4a02b4b-247e-48c3-b581-687bd24267c7 00:11:03.383 Node size: 16384 00:11:03.383 Sector size: 4096 (CPU page size: 4096) 00:11:03.383 Filesystem size: 510.00MiB 00:11:03.383 Block group profiles: 00:11:03.383 Data: single 8.00MiB 00:11:03.383 Metadata: DUP 32.00MiB 00:11:03.383 System: DUP 8.00MiB 00:11:03.383 SSD detected: yes 00:11:03.383 Zoned device: no 00:11:03.383 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:03.383 Checksum: crc32c 00:11:03.383 Number of devices: 1 00:11:03.383 Devices: 00:11:03.383 ID SIZE PATH 00:11:03.383 1 510.00MiB /dev/nvme0n1p1 00:11:03.383 00:11:03.383 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:03.383 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:04.324 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:04.324 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:04.324 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:04.324 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:04.324 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:04.324 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:04.324 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1523977 00:11:04.324 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:04.324 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:04.324 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:04.324 
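After every unmount the script asserts that the SPDK target survived the I/O before re-checking device enumeration; pid 1523977 is this run's nvmf_tgt. A sketch of that assertion:

kill -0 1523977                            # signal 0: existence check only
lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still enumerated
lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still enumerated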
12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:04.324 00:11:04.324 real 0m1.147s 00:11:04.324 user 0m0.030s 00:11:04.324 sys 0m0.119s 00:11:04.324 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.324 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:04.324 ************************************ 00:11:04.324 END TEST filesystem_btrfs 00:11:04.324 ************************************ 00:11:04.324 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:04.325 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:04.325 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.325 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.325 ************************************ 00:11:04.325 START TEST filesystem_xfs 00:11:04.325 ************************************ 00:11:04.325 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:04.325 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:04.325 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:04.325 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:04.325 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:04.325 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:04.325 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:04.325 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:04.325 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:04.325 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:04.325 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:04.325 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:04.325 = sectsz=512 attr=2, projid32bit=1 00:11:04.325 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:04.325 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:04.325 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:04.325 = sunit=0 swidth=0 blks 00:11:04.325 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:04.325 log =internal log bsize=4096 blocks=16384, version=2 00:11:04.325 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:04.325 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:05.273 Discarding blocks...Done. 00:11:05.273 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:05.273 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:07.818 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:07.818 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:07.818 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:07.818 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:07.818 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:07.818 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:07.818 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1523977 00:11:07.818 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:07.818 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:07.818 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:07.818 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:07.818 00:11:07.818 real 0m3.557s 00:11:07.818 user 0m0.037s 00:11:07.818 sys 0m0.069s 00:11:07.818 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.818 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:07.818 ************************************ 00:11:07.818 END TEST filesystem_xfs 00:11:07.818 ************************************ 00:11:07.818 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:08.078 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:08.078 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.078 12:15:42 
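Teardown for the no-in-capsule block, as traced above: remove the test partition under an exclusive lock on the device node, flush, then disconnect the initiator:

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1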
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:08.078 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:08.078 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:08.078 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.078 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:08.078 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.078 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:08.339 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:08.339 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.339 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.339 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.339 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:08.339 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1523977 00:11:08.339 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1523977 ']' 00:11:08.339 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1523977 00:11:08.339 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:08.339 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:08.339 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1523977 00:11:08.340 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:08.340 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:08.340 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1523977' 00:11:08.340 killing process with pid 1523977 00:11:08.340 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1523977 00:11:08.340 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 1523977 00:11:08.600 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:08.600 00:11:08.600 real 0m19.371s 00:11:08.600 user 1m16.578s 00:11:08.600 sys 0m1.458s 00:11:08.600 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.600 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.600 ************************************ 00:11:08.600 END TEST nvmf_filesystem_no_in_capsule 00:11:08.600 ************************************ 00:11:08.600 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:08.600 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:08.600 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.600 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:08.600 ************************************ 00:11:08.600 START TEST nvmf_filesystem_in_capsule 00:11:08.600 ************************************ 00:11:08.600 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:08.600 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:08.600 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:08.600 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:08.600 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.600 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.600 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1528061 00:11:08.600 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1528061 00:11:08.600 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.600 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1528061 ']' 00:11:08.600 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.600 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.600 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
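The in-capsule variant launches a fresh target (pid 1528061 here) inside the test network namespace. A hedged reconstruction of the nvmfappstart command traced above, paths and namespace name as logged:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!   # then poll until /var/tmp/spdk.sock accepts RPCs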
00:11:08.600 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.600 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.600 [2024-11-04 12:15:43.089922] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:11:08.600 [2024-11-04 12:15:43.089971] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.600 [2024-11-04 12:15:43.155872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.861 [2024-11-04 12:15:43.192061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.861 [2024-11-04 12:15:43.192094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.861 [2024-11-04 12:15:43.192102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.861 [2024-11-04 12:15:43.192109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.861 [2024-11-04 12:15:43.192114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.861 [2024-11-04 12:15:43.193822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.861 [2024-11-04 12:15:43.194099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.861 [2024-11-04 12:15:43.194259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.861 [2024-11-04 12:15:43.194259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.431 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.431 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:09.431 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:09.431 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:09.431 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.431 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.431 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:09.431 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:09.431 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.431 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.431 [2024-11-04 12:15:43.935820] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.431 12:15:43 
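The -m 0xF core mask explains the four reactors reported above on cores 0-3; a quick way to decode any such mask (illustrative only):

mask=0xF
for c in {0..31}; do
    (( (mask >> c) & 1 )) && echo "reactor on core $c"
done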
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.431 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:09.431 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.431 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.692 Malloc1 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.692 [2024-11-04 12:15:44.061221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:09.692 12:15:44 
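Target-side configuration for the 4096-byte in-capsule run is the same five RPCs traced above. As a plain rpc.py sequence (rpc.py is SPDK's scripts/rpc.py; default socket assumed):

rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # 4096 B in-capsule data
rpc.py bdev_malloc_create 512 512 -b Malloc1              # 512 MiB, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420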
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:09.692 { 00:11:09.692 "name": "Malloc1", 00:11:09.692 "aliases": [ 00:11:09.692 "1d13e2fd-2e3a-4ac3-9232-6c9c12cb53b8" 00:11:09.692 ], 00:11:09.692 "product_name": "Malloc disk", 00:11:09.692 "block_size": 512, 00:11:09.692 "num_blocks": 1048576, 00:11:09.692 "uuid": "1d13e2fd-2e3a-4ac3-9232-6c9c12cb53b8", 00:11:09.692 "assigned_rate_limits": { 00:11:09.692 "rw_ios_per_sec": 0, 00:11:09.692 "rw_mbytes_per_sec": 0, 00:11:09.692 "r_mbytes_per_sec": 0, 00:11:09.692 "w_mbytes_per_sec": 0 00:11:09.692 }, 00:11:09.692 "claimed": true, 00:11:09.692 "claim_type": "exclusive_write", 00:11:09.692 "zoned": false, 00:11:09.692 "supported_io_types": { 00:11:09.692 "read": true, 00:11:09.692 "write": true, 00:11:09.692 "unmap": true, 00:11:09.692 "flush": true, 00:11:09.692 "reset": true, 00:11:09.692 "nvme_admin": false, 00:11:09.692 "nvme_io": false, 00:11:09.692 "nvme_io_md": false, 00:11:09.692 "write_zeroes": true, 00:11:09.692 "zcopy": true, 00:11:09.692 "get_zone_info": false, 00:11:09.692 "zone_management": false, 00:11:09.692 "zone_append": false, 00:11:09.692 "compare": false, 00:11:09.692 "compare_and_write": false, 00:11:09.692 "abort": true, 00:11:09.692 "seek_hole": false, 00:11:09.692 "seek_data": false, 00:11:09.692 "copy": true, 00:11:09.692 "nvme_iov_md": false 00:11:09.692 }, 00:11:09.692 "memory_domains": [ 00:11:09.692 { 00:11:09.692 "dma_device_id": "system", 00:11:09.692 "dma_device_type": 1 00:11:09.692 }, 00:11:09.692 { 00:11:09.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.692 "dma_device_type": 2 00:11:09.692 } 00:11:09.692 ], 00:11:09.692 "driver_specific": {} 00:11:09.692 } 00:11:09.692 ]' 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:09.692 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:09.693 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:11.604 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:11.604 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:11.604 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.604 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:11.604 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:13.561 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:13.561 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:13.561 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.561 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:13.561 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.561 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:13.561 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:13.561 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:13.561 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:13.561 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:13.561 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:13.561 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:13.562 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:13.562 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:13.562 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:13.562 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:13.562 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:13.562 12:15:48 
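waitforserial, seen again here, is a bounded poll for the namespace to appear under its subsystem serial. Condensed from the trace (sleep interval and retry cap as logged):

i=0
while (( i++ <= 15 )); do
    sleep 2
    (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
done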
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:14.131 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:15.073 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:15.073 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:15.073 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:15.073 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:15.073 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.073 ************************************ 00:11:15.073 START TEST filesystem_in_capsule_ext4 00:11:15.073 ************************************ 00:11:15.073 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:15.073 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:15.073 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:15.073 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:15.073 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:15.073 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:15.073 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:15.073 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:15.073 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:15.073 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:15.073 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:15.073 mke2fs 1.47.0 (5-Feb-2023) 00:11:15.332 Discarding device blocks: 0/522240 done 00:11:15.332 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:15.332 Filesystem UUID: 912867f4-8387-456c-b3aa-bb4523faa526 00:11:15.332 Superblock backups stored on blocks: 00:11:15.332 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:15.332 00:11:15.332 Allocating group tables: 0/64 done 00:11:15.332 Writing inode tables: 
0/64 done 00:11:16.273 Creating journal (8192 blocks): done 00:11:17.657 Writing superblocks and filesystem accounting information: 0/64 done 00:11:17.657 00:11:17.657 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:17.657 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:22.943 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:22.943 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:22.943 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:22.943 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:22.943 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:22.943 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:22.943 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1528061 00:11:22.943 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:22.943 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:22.943 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:22.943 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:22.943 00:11:22.943 real 0m7.854s 00:11:22.943 user 0m0.033s 00:11:22.943 sys 0m0.072s 00:11:22.943 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.943 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:22.943 ************************************ 00:11:22.943 END TEST filesystem_in_capsule_ext4 00:11:22.943 ************************************ 00:11:23.203 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:23.203 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:23.203 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.203 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.203 
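make_filesystem drives all three passes; the only per-fstype difference visible in its trace is the force flag ('-F' for ext4, '-f' for btrfs and xfs). The selection condenses to:

case "$fstype" in
    ext4) force=-F ;;    # mkfs.ext4 takes -F
    *)    force=-f ;;    # mkfs.btrfs and mkfs.xfs take -f
esac
mkfs."$fstype" "$force" /dev/nvme0n1p1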
************************************ 00:11:23.203 START TEST filesystem_in_capsule_btrfs 00:11:23.203 ************************************ 00:11:23.203 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:23.203 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:23.203 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:23.203 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:23.203 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:23.203 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:23.203 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:23.203 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:23.203 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:23.203 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:23.204 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:23.464 btrfs-progs v6.8.1 00:11:23.464 See https://btrfs.readthedocs.io for more information. 00:11:23.464 00:11:23.464 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:23.464 NOTE: several default settings have changed in version 5.15, please make sure
00:11:23.464 this does not affect your deployments:
00:11:23.464 - DUP for metadata (-m dup)
00:11:23.464 - enabled no-holes (-O no-holes)
00:11:23.464 - enabled free-space-tree (-R free-space-tree)
00:11:23.464 
00:11:23.464 Label:              (null)
00:11:23.464 UUID:               a8739a86-9c41-4bcf-92bc-c4b97fcafb77
00:11:23.464 Node size:          16384
00:11:23.464 Sector size:        4096 (CPU page size: 4096)
00:11:23.464 Filesystem size:    510.00MiB
00:11:23.464 Block group profiles:
00:11:23.464   Data:             single    8.00MiB
00:11:23.464   Metadata:         DUP      32.00MiB
00:11:23.464   System:           DUP       8.00MiB
00:11:23.464 SSD detected:       yes
00:11:23.464 Zoned device:       no
00:11:23.464 Features:           extref, skinny-metadata, no-holes, free-space-tree
00:11:23.464 Checksum:           crc32c
00:11:23.464 Number of devices:  1
00:11:23.464 Devices:
00:11:23.464   ID        SIZE  PATH
00:11:23.464    1   510.00MiB  /dev/nvme0n1p1
00:11:23.464 
00:11:23.465 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0
00:11:23.465 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:23.725 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:23.725 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:11:23.725 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:23.725 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:11:23.725 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:11:23.725 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:23.985 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1528061
00:11:23.985 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:23.985 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:23.985 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:23.985 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:23.985 
00:11:23.985 real 0m0.775s
00:11:23.985 user 0m0.035s
00:11:23.985 sys 0m0.111s
00:11:23.985 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:23.985 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:11:23.985 ************************************
00:11:23.985 END TEST filesystem_in_capsule_btrfs
00:11:23.985 ************************************
00:11:23.985 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:11:23.985 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:11:23.985 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:23.985 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:23.985 ************************************
00:11:23.985 START TEST filesystem_in_capsule_xfs
00:11:23.985 ************************************
00:11:23.985 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1
12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs
12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1
12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0
12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force
12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']'
12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f
12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1
00:11:23.985 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:11:23.985          = sectsz=512 attr=2, projid32bit=1
00:11:23.985          = crc=1 finobt=1, sparse=1, rmapbt=0
00:11:23.985          = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:11:23.985 data     = bsize=4096 blocks=130560, imaxpct=25
00:11:23.985          = sunit=0 swidth=0 blks
00:11:23.985 naming   =version 2 bsize=4096 ascii-ci=0, ftype=1
00:11:23.985 log      =internal log bsize=4096 blocks=16384, version=2
00:11:23.985          = sectsz=512 sunit=0 blks, lazy-count=1
00:11:23.985 realtime =none extsz=4096 blocks=0, rtextents=0
00:11:24.926 Discarding blocks...Done.
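The xtrace above exercises two helpers from the test tree: make_filesystem in common/autotest_common.sh, which picks the right "force" flag for the filesystem type before running mkfs, and the mount/write/unmount smoke test in target/filesystem.sh, which then confirms the nvmf target process survived. A condensed bash sketch of what the trace executes; the device, mountpoint and target PID are the values from this run, and the -F spelling for the untraced ext4 branch is an assumption (only the xfs/btrfs -f path is traced):

    # make_filesystem (common/autotest_common.sh@926-937): normalize the force flag, then mkfs.
    make_filesystem() {
        local fstype=$1 dev_name=$2
        local i=0 force                    # i is a retry counter; only the first attempt is traced
        if [ "$fstype" = ext4 ]; then
            force=-F                       # assumed: mkfs.ext4 spells "force" as -F
        else
            force=-f                       # traced: mkfs.xfs/mkfs.btrfs use -f
        fi
        mkfs.$fstype $force "$dev_name"
    }

    # Smoke test (target/filesystem.sh@23-43): exercise the filesystem that sits on the
    # exported NVMe-oF namespace, then verify the target and its block devices survived.
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 1528061                        # nvmf_tgt (pid from this run) must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w nvme0n1p1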
00:11:24.926 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:24.926 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.472 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:27.472 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:27.472 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:27.472 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:27.472 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:27.472 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.472 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1528061 00:11:27.472 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.472 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.472 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:27.472 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.732 00:11:27.733 real 0m3.631s 00:11:27.733 user 0m0.021s 00:11:27.733 sys 0m0.086s 00:11:27.733 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.733 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:27.733 ************************************ 00:11:27.733 END TEST filesystem_in_capsule_xfs 00:11:27.733 ************************************ 00:11:27.733 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:27.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1528061 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1528061 ']' 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1528061 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:27.993 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1528061 00:11:28.253 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:28.253 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:28.253 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1528061' 00:11:28.253 killing process with pid 1528061 00:11:28.253 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1528061 00:11:28.253 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1528061 00:11:28.253 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:28.254 00:11:28.254 real 0m19.777s 00:11:28.254 user 1m18.293s 00:11:28.254 sys 0m1.432s 00:11:28.254 12:16:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.254 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.254 ************************************ 00:11:28.254 END TEST nvmf_filesystem_in_capsule 00:11:28.254 ************************************ 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:28.514 rmmod nvme_tcp 00:11:28.514 rmmod nvme_fabrics 00:11:28.514 rmmod nvme_keyring 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.514 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.060 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:31.060 00:11:31.060 real 0m49.423s 00:11:31.060 user 2m37.213s 00:11:31.060 sys 0m8.770s 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.060 
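The trace over the last few blocks is the standard teardown: nvme disconnect, wait for the test serial to drop out of lsblk (waitforserial_disconnect), delete the subsystem over RPC, stop the target (killprocess first checks the pid's comm is reactor_0 before signalling), then unwind the kernel modules and strip the SPDK-tagged iptables rules. A reduced sketch of that sequence; the retry bound in the wait loop is an assumption, since the trace only shows each check passing immediately:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

    waitforserial_disconnect() {
        local i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$1"; do
            (( ++i > 15 )) && return 1     # bound assumed; in this run the serial is already gone
            sleep 1
        done
    }
    waitforserial_disconnect SPDKISFASTANDAWESOME

    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    [ "$(ps --no-headers -o comm= 1528061)" = reactor_0 ] && kill 1528061 && wait 1528061

    sync
    for i in {1..20}; do                   # the {1..20} loop is in the trace
        modprobe -v -r nvme-tcp && break
    done
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF-tagged rules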
************************************ 00:11:31.060 END TEST nvmf_filesystem 00:11:31.060 ************************************ 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:31.060 ************************************ 00:11:31.060 START TEST nvmf_target_discovery 00:11:31.060 ************************************ 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:31.060 * Looking for test storage... 00:11:31.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:31.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.060 --rc genhtml_branch_coverage=1 00:11:31.060 --rc genhtml_function_coverage=1 00:11:31.060 --rc genhtml_legend=1 00:11:31.060 --rc geninfo_all_blocks=1 00:11:31.060 --rc geninfo_unexecuted_blocks=1 00:11:31.060 00:11:31.060 ' 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:31.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.060 --rc genhtml_branch_coverage=1 00:11:31.060 --rc genhtml_function_coverage=1 00:11:31.060 --rc genhtml_legend=1 00:11:31.060 --rc geninfo_all_blocks=1 00:11:31.060 --rc geninfo_unexecuted_blocks=1 00:11:31.060 00:11:31.060 ' 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:31.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.060 --rc genhtml_branch_coverage=1 00:11:31.060 --rc genhtml_function_coverage=1 00:11:31.060 --rc genhtml_legend=1 00:11:31.060 --rc geninfo_all_blocks=1 00:11:31.060 --rc geninfo_unexecuted_blocks=1 00:11:31.060 00:11:31.060 ' 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:31.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.060 --rc genhtml_branch_coverage=1 00:11:31.060 --rc genhtml_function_coverage=1 00:11:31.060 --rc genhtml_legend=1 00:11:31.060 --rc geninfo_all_blocks=1 00:11:31.060 --rc geninfo_unexecuted_blocks=1 00:11:31.060 00:11:31.060 ' 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.060 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:31.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:31.061 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:39.200 12:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:39.200 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:39.200 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:39.200 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
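The block above is gather_supported_nvmf_pci_devs walking a whitelist of NIC PCI IDs (the Intel E810 IDs 0x1592/0x159b in this run) and mapping each PCI function to its kernel net devices through sysfs. A sketch of the core loop; populating the e810 array from the PCI bus cache is elided, and the operstate read stands in for the trace's [[ up == up ]] check:

    net_devs=()
    for pci in "${e810[@]}"; do                              # e.g. 0000:4b:00.0, 0000:4b:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # one entry per netdev of that function
        for net_dev in "${!pci_net_devs[@]}"; do
            # keep only interfaces whose link is up (assumed source of the traced "up" value)
            [[ $(cat "${pci_net_devs[net_dev]}/operstate") == up ]] || unset "pci_net_devs[net_dev]"
        done
        (( ${#pci_net_devs[@]} == 0 )) && continue
        pci_net_devs=("${pci_net_devs[@]##*/}")              # strip the sysfs path, keep cvl_0_0 etc.
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

With two usable ports found, the script designates cvl_0_0 as the target interface and cvl_0_1 as the initiator interface, as the trace shows next.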
00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:39.200 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:39.200 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:39.201 12:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:39.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:39.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:11:39.201 00:11:39.201 --- 10.0.0.2 ping statistics --- 00:11:39.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.201 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:39.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:39.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:11:39.201 00:11:39.201 --- 10.0.0.1 ping statistics --- 00:11:39.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.201 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=1536307 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 1536307 00:11:39.201 12:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1536307 ']' 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:39.201 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.201 [2024-11-04 12:16:12.780978] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:11:39.201 [2024-11-04 12:16:12.781042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.201 [2024-11-04 12:16:12.853812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.201 [2024-11-04 12:16:12.896558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.201 [2024-11-04 12:16:12.896596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.201 [2024-11-04 12:16:12.896604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.201 [2024-11-04 12:16:12.896612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.201 [2024-11-04 12:16:12.896619] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
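The sequence traced across the preceding blocks is the TCP test-bed bring-up: flush both ports, move the target port into a private network namespace, address the two ends as 10.0.0.1 (initiator) and 10.0.0.2 (target), open TCP/4420 in iptables, verify reachability both ways, load nvme-tcp, and only then start nvmf_tgt inside the namespace. A minimal sketch with the addresses, interface names and target flags from this run; waitforlisten is reduced here to polling for the RPC socket:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                        # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec $NS ping -c 1 10.0.0.1                 # target ns -> root ns
    modprobe nvme-tcp

    # Launch the target inside the namespace and wait for its RPC socket.
    ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # stand-in for waitforlisten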
00:11:39.201 [2024-11-04 12:16:12.898222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.201 [2024-11-04 12:16:12.898346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.201 [2024-11-04 12:16:12.898508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.201 [2024-11-04 12:16:12.898509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.201 [2024-11-04 12:16:13.633907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.201 Null1 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.201 12:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.201 [2024-11-04 12:16:13.694220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.201 Null2 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.201 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:39.202 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.202 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.202 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.202 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:39.202 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:39.202 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.202 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:39.202 Null3 00:11:39.202 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.202 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:39.202 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.202 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.464 Null4 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.464 12:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.464 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:39.727 00:11:39.727 Discovery Log Number of Records 6, Generation counter 6 00:11:39.727 =====Discovery Log Entry 0====== 00:11:39.727 trtype: tcp 00:11:39.727 adrfam: ipv4 00:11:39.727 subtype: current discovery subsystem 00:11:39.727 treq: not required 00:11:39.727 portid: 0 00:11:39.727 trsvcid: 4420 00:11:39.727 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:39.727 traddr: 10.0.0.2 00:11:39.727 eflags: explicit discovery connections, duplicate discovery information 00:11:39.727 sectype: none 00:11:39.727 =====Discovery Log Entry 1====== 00:11:39.727 trtype: tcp 00:11:39.727 adrfam: ipv4 00:11:39.727 subtype: nvme subsystem 00:11:39.727 treq: not required 00:11:39.727 portid: 0 00:11:39.727 trsvcid: 4420 00:11:39.727 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:39.727 traddr: 10.0.0.2 00:11:39.727 eflags: none 00:11:39.727 sectype: none 00:11:39.727 =====Discovery Log Entry 2====== 00:11:39.727 trtype: tcp 00:11:39.727 adrfam: ipv4 00:11:39.727 subtype: nvme subsystem 00:11:39.727 treq: not required 00:11:39.727 portid: 0 00:11:39.727 trsvcid: 4420 00:11:39.727 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:39.727 traddr: 10.0.0.2 00:11:39.727 eflags: none 00:11:39.727 sectype: none 00:11:39.727 =====Discovery Log Entry 3====== 00:11:39.727 trtype: tcp 00:11:39.727 adrfam: ipv4 00:11:39.727 subtype: nvme subsystem 00:11:39.727 treq: not required 00:11:39.727 portid: 0 00:11:39.727 trsvcid: 4420 00:11:39.727 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:39.727 traddr: 10.0.0.2 00:11:39.727 eflags: none 00:11:39.727 sectype: none 00:11:39.727 =====Discovery Log Entry 4====== 00:11:39.727 trtype: tcp 00:11:39.727 adrfam: ipv4 00:11:39.727 subtype: nvme subsystem 
00:11:39.727 treq: not required 00:11:39.727 portid: 0 00:11:39.727 trsvcid: 4420 00:11:39.727 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:39.727 traddr: 10.0.0.2 00:11:39.727 eflags: none 00:11:39.727 sectype: none 00:11:39.727 =====Discovery Log Entry 5====== 00:11:39.727 trtype: tcp 00:11:39.727 adrfam: ipv4 00:11:39.727 subtype: discovery subsystem referral 00:11:39.727 treq: not required 00:11:39.727 portid: 0 00:11:39.727 trsvcid: 4430 00:11:39.727 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:39.727 traddr: 10.0.0.2 00:11:39.727 eflags: none 00:11:39.727 sectype: none 00:11:39.727 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:39.727 Perform nvmf subsystem discovery via RPC 00:11:39.727 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:39.727 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.727 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.727 [ 00:11:39.727 { 00:11:39.727 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:39.727 "subtype": "Discovery", 00:11:39.727 "listen_addresses": [ 00:11:39.727 { 00:11:39.727 "trtype": "TCP", 00:11:39.727 "adrfam": "IPv4", 00:11:39.727 "traddr": "10.0.0.2", 00:11:39.727 "trsvcid": "4420" 00:11:39.727 } 00:11:39.727 ], 00:11:39.727 "allow_any_host": true, 00:11:39.727 "hosts": [] 00:11:39.727 }, 00:11:39.727 { 00:11:39.727 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.727 "subtype": "NVMe", 00:11:39.727 "listen_addresses": [ 00:11:39.727 { 00:11:39.727 "trtype": "TCP", 00:11:39.727 "adrfam": "IPv4", 00:11:39.727 "traddr": "10.0.0.2", 00:11:39.727 "trsvcid": "4420" 00:11:39.727 } 00:11:39.727 ], 00:11:39.727 "allow_any_host": true, 00:11:39.727 "hosts": [], 00:11:39.727 "serial_number": "SPDK00000000000001", 00:11:39.727 "model_number": "SPDK bdev Controller", 00:11:39.727 "max_namespaces": 32, 00:11:39.727 "min_cntlid": 1, 00:11:39.727 "max_cntlid": 65519, 00:11:39.727 "namespaces": [ 00:11:39.727 { 00:11:39.727 "nsid": 1, 00:11:39.727 "bdev_name": "Null1", 00:11:39.727 "name": "Null1", 00:11:39.727 "nguid": "A27F53694F9A4ACEB49DDCB196125DF8", 00:11:39.727 "uuid": "a27f5369-4f9a-4ace-b49d-dcb196125df8" 00:11:39.727 } 00:11:39.727 ] 00:11:39.727 }, 00:11:39.727 { 00:11:39.727 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:39.727 "subtype": "NVMe", 00:11:39.727 "listen_addresses": [ 00:11:39.727 { 00:11:39.727 "trtype": "TCP", 00:11:39.727 "adrfam": "IPv4", 00:11:39.727 "traddr": "10.0.0.2", 00:11:39.727 "trsvcid": "4420" 00:11:39.727 } 00:11:39.727 ], 00:11:39.727 "allow_any_host": true, 00:11:39.727 "hosts": [], 00:11:39.727 "serial_number": "SPDK00000000000002", 00:11:39.727 "model_number": "SPDK bdev Controller", 00:11:39.727 "max_namespaces": 32, 00:11:39.727 "min_cntlid": 1, 00:11:39.727 "max_cntlid": 65519, 00:11:39.727 "namespaces": [ 00:11:39.727 { 00:11:39.727 "nsid": 1, 00:11:39.727 "bdev_name": "Null2", 00:11:39.727 "name": "Null2", 00:11:39.727 "nguid": "C9CAA23D6B874E9BB965898595F51A5D", 00:11:39.727 "uuid": "c9caa23d-6b87-4e9b-b965-898595f51a5d" 00:11:39.727 } 00:11:39.727 ] 00:11:39.727 }, 00:11:39.727 { 00:11:39.727 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:39.727 "subtype": "NVMe", 00:11:39.727 "listen_addresses": [ 00:11:39.727 { 00:11:39.727 "trtype": "TCP", 00:11:39.727 "adrfam": "IPv4", 00:11:39.727 "traddr": "10.0.0.2", 
00:11:39.727 "trsvcid": "4420" 00:11:39.727 } 00:11:39.727 ], 00:11:39.727 "allow_any_host": true, 00:11:39.727 "hosts": [], 00:11:39.727 "serial_number": "SPDK00000000000003", 00:11:39.727 "model_number": "SPDK bdev Controller", 00:11:39.727 "max_namespaces": 32, 00:11:39.727 "min_cntlid": 1, 00:11:39.727 "max_cntlid": 65519, 00:11:39.727 "namespaces": [ 00:11:39.727 { 00:11:39.727 "nsid": 1, 00:11:39.727 "bdev_name": "Null3", 00:11:39.727 "name": "Null3", 00:11:39.727 "nguid": "056F6A59D81749C89E3EB4E13BB33932", 00:11:39.727 "uuid": "056f6a59-d817-49c8-9e3e-b4e13bb33932" 00:11:39.727 } 00:11:39.727 ] 00:11:39.727 }, 00:11:39.727 { 00:11:39.727 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:39.727 "subtype": "NVMe", 00:11:39.727 "listen_addresses": [ 00:11:39.727 { 00:11:39.727 "trtype": "TCP", 00:11:39.727 "adrfam": "IPv4", 00:11:39.727 "traddr": "10.0.0.2", 00:11:39.727 "trsvcid": "4420" 00:11:39.727 } 00:11:39.727 ], 00:11:39.727 "allow_any_host": true, 00:11:39.727 "hosts": [], 00:11:39.727 "serial_number": "SPDK00000000000004", 00:11:39.727 "model_number": "SPDK bdev Controller", 00:11:39.727 "max_namespaces": 32, 00:11:39.727 "min_cntlid": 1, 00:11:39.727 "max_cntlid": 65519, 00:11:39.727 "namespaces": [ 00:11:39.727 { 00:11:39.727 "nsid": 1, 00:11:39.727 "bdev_name": "Null4", 00:11:39.727 "name": "Null4", 00:11:39.728 "nguid": "80BC39653B3F472DB5EB63448C0AFCFD", 00:11:39.728 "uuid": "80bc3965-3b3f-472d-b5eb-63448c0afcfd" 00:11:39.728 } 00:11:39.728 ] 00:11:39.728 } 00:11:39.728 ] 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.728 12:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:39.728 12:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:39.728 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:39.728 rmmod nvme_tcp 00:11:39.728 rmmod nvme_fabrics 00:11:39.991 rmmod nvme_keyring 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 1536307 ']' 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 1536307 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1536307 ']' 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1536307 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1536307 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1536307' 00:11:39.991 killing process with pid 1536307 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1536307 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1536307 00:11:39.991 12:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.991 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:42.536 00:11:42.536 real 0m11.517s 00:11:42.536 user 0m8.782s 00:11:42.536 sys 0m6.016s 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.536 ************************************ 00:11:42.536 END TEST nvmf_target_discovery 00:11:42.536 ************************************ 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.536 ************************************ 00:11:42.536 START TEST nvmf_referrals 00:11:42.536 ************************************ 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:42.536 * Looking for test storage... 
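The nvmf_target_discovery run that just ended above boils down to a short RPC sequence. A minimal sketch, assuming a running nvmf_tgt reachable over the default /var/tmp/spdk.sock and scripts/rpc.py from the SPDK checkout (the test drives the same calls through its rpc_cmd wrapper):

# Per-subsystem setup loop exercised by target/discovery.sh above:
for i in $(seq 1 4); do
  ./scripts/rpc.py bdev_null_create "Null$i" 102400 512            # 100 MiB null bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
      -a -s "SPDK0000000000000$i"                                  # -a: allow any host
  ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
      -t tcp -a 10.0.0.2 -s 4420
done
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
# Discovery from the initiator then reports 6 records, as logged above:
# the discovery subsystem itself, cnode1..cnode4, and the port-4430 referral.
nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420

Teardown is the mirror image: nvmf_delete_subsystem and bdev_null_delete per index, then nvmf_discovery_remove_referral, as the log shows before nvmftestfini.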
00:11:42.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:42.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.536 --rc genhtml_branch_coverage=1 00:11:42.536 --rc genhtml_function_coverage=1 00:11:42.536 --rc genhtml_legend=1 00:11:42.536 --rc geninfo_all_blocks=1 00:11:42.536 --rc geninfo_unexecuted_blocks=1 00:11:42.536 00:11:42.536 ' 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:42.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.536 --rc genhtml_branch_coverage=1 00:11:42.536 --rc genhtml_function_coverage=1 00:11:42.536 --rc genhtml_legend=1 00:11:42.536 --rc geninfo_all_blocks=1 00:11:42.536 --rc geninfo_unexecuted_blocks=1 00:11:42.536 00:11:42.536 ' 00:11:42.536 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:42.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.537 --rc genhtml_branch_coverage=1 00:11:42.537 --rc genhtml_function_coverage=1 00:11:42.537 --rc genhtml_legend=1 00:11:42.537 --rc geninfo_all_blocks=1 00:11:42.537 --rc geninfo_unexecuted_blocks=1 00:11:42.537 00:11:42.537 ' 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:42.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.537 --rc genhtml_branch_coverage=1 00:11:42.537 --rc genhtml_function_coverage=1 00:11:42.537 --rc genhtml_legend=1 00:11:42.537 --rc geninfo_all_blocks=1 00:11:42.537 --rc geninfo_unexecuted_blocks=1 00:11:42.537 00:11:42.537 ' 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
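As logged above, common.sh fixes one host identity per run via nvme gen-hostnqn and reuses it for every discover/connect. A sketch of one plausible derivation of the two flags (the exact logic inside common.sh may differ; the suffix-strip is an assumption here):

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: bare UUID portion, reused as --hostid
# Once the 8009 discovery listener further below is up, both flags ride along:
nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 8009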
00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:42.537 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:50.675 12:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.675 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:50.676 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:50.676 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:50.676 
12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:50.676 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:50.676 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:50.676 12:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.676 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:50.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:50.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:11:50.676 00:11:50.676 --- 10.0.0.2 ping statistics --- 00:11:50.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.676 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:50.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:11:50.676 00:11:50.676 --- 10.0.0.1 ping statistics --- 00:11:50.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.676 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=1540996 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 1540996 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1540996 ']' 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
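nvmftestinit above splits the two e810 ports across network namespaces so target and initiator can share one host: cvl_0_0 moves into a private namespace as the target side, cvl_0_1 stays in the root namespace as the initiator, and a ping in each direction proves the link. A condensed sketch of that wiring and the target launch it leads into (the build path and the socket poll are assumptions; the test's waitforlisten helper is more careful):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ping -c 1 10.0.0.2                                         # initiator -> target sanity check
# Launch the target inside the namespace on cores 0-3, all trace groups on:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done        # crude stand-in for waitforlisten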
00:11:50.676 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:50.677 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.677 [2024-11-04 12:16:24.248049] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:11:50.677 [2024-11-04 12:16:24.248120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.677 [2024-11-04 12:16:24.319535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.677 [2024-11-04 12:16:24.362610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.677 [2024-11-04 12:16:24.362649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.677 [2024-11-04 12:16:24.362657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.677 [2024-11-04 12:16:24.362664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.677 [2024-11-04 12:16:24.362670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.677 [2024-11-04 12:16:24.364277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.677 [2024-11-04 12:16:24.364394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.677 [2024-11-04 12:16:24.364551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.677 [2024-11-04 12:16:24.364552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.677 [2024-11-04 12:16:25.095714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:50.677 [2024-11-04 12:16:25.111922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:50.677 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.938 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:50.938 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:50.938 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:50.938 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:50.938 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:50.938 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:50.938 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:50.938 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:50.938 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:50.938 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:50.938 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:50.938 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.938 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.938 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.938 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:50.938 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.938 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:51.199 12:16:25 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.199 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:51.460 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:51.720 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:51.720 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:51.720 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:51.720 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:51.720 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:51.720 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.720 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:51.720 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:51.720 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:51.720 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:51.720 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:51.980 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.981 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:51.981 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:51.981 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:51.981 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.981 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.981 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.981 12:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:51.981 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:51.981 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:51.981 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:51.981 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.981 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:51.981 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.241 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.241 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:52.241 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:52.241 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:52.241 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:52.241 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:52.241 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.241 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:52.241 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:52.241 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:52.241 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:52.241 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:52.241 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:52.241 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:52.241 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.241 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:52.502 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:52.502 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:52.502 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:52.502 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:52.502 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.502 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:52.763 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:52.763 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:52.763 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.763 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.763 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.763 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:52.763 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:52.763 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.763 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.763 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.763 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:52.763 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:52.763 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:52.763 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:52.763 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.763 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:52.763 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
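The referral checks traced above reduce to a short RPC/discovery round trip. A minimal standalone sketch of the same flow, assuming a target already listening for discovery on 10.0.0.2:8009 and scripts/rpc.py as the RPC client (the harness's rpc_cmd wrapper plays the same role; addresses and jq filters are taken from the log above):

# Register three referrals on the target, then confirm they are visible
# both via RPC and in the discovery log page reported to hosts.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# Target-side view: traddr of every configured referral.
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# Host-side view: the same addresses, read back from the discovery service
# (the record for the discovery service itself is filtered out).
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

# -n selects what the referral advertises: -n discovery yields a
# "discovery subsystem referral" record (matching the default behaviour
# seen above), while a concrete NQN such as nqn.2016-06.io.spdk:cnode1
# yields an "nvme subsystem" record with that subnqn.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done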
00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:53.023 rmmod nvme_tcp 00:11:53.023 rmmod nvme_fabrics 00:11:53.023 rmmod nvme_keyring 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 1540996 ']' 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 1540996 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1540996 ']' 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1540996 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1540996 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1540996' 00:11:53.023 killing process with pid 1540996 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1540996 00:11:53.023 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1540996 00:11:53.283 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:53.283 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:53.283 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:53.283 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:53.283 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:11:53.283 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:53.283 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:11:53.283 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:53.283 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:53.283 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.283 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.283 12:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.193 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:55.193 00:11:55.193 real 0m13.075s 00:11:55.193 user 0m16.326s 00:11:55.193 sys 0m6.253s 00:11:55.193 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:55.193 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.193 ************************************ 00:11:55.193 END TEST nvmf_referrals 00:11:55.193 ************************************ 00:11:55.453 12:16:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:55.454 12:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:55.454 12:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.454 12:16:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:55.454 ************************************ 00:11:55.454 START TEST nvmf_connect_disconnect 00:11:55.454 ************************************ 00:11:55.454 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:55.454 * Looking for test storage... 00:11:55.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.454 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:55.454 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:55.454 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.454 12:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:55.454 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:55.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.715 --rc genhtml_branch_coverage=1 00:11:55.715 --rc genhtml_function_coverage=1 00:11:55.715 --rc genhtml_legend=1 00:11:55.715 --rc geninfo_all_blocks=1 00:11:55.715 --rc geninfo_unexecuted_blocks=1 00:11:55.715 00:11:55.715 ' 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:55.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.715 --rc genhtml_branch_coverage=1 00:11:55.715 --rc genhtml_function_coverage=1 00:11:55.715 --rc genhtml_legend=1 00:11:55.715 --rc geninfo_all_blocks=1 00:11:55.715 --rc geninfo_unexecuted_blocks=1 00:11:55.715 00:11:55.715 ' 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:55.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.715 --rc genhtml_branch_coverage=1 00:11:55.715 --rc genhtml_function_coverage=1 00:11:55.715 --rc genhtml_legend=1 00:11:55.715 --rc geninfo_all_blocks=1 00:11:55.715 --rc geninfo_unexecuted_blocks=1 00:11:55.715 00:11:55.715 ' 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:55.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.715 --rc genhtml_branch_coverage=1 00:11:55.715 --rc genhtml_function_coverage=1 00:11:55.715 --rc genhtml_legend=1 00:11:55.715 --rc geninfo_all_blocks=1 00:11:55.715 --rc geninfo_unexecuted_blocks=1 00:11:55.715 00:11:55.715 ' 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.715 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.716 12:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.716 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:02.312 
12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:02.312 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:02.313 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:02.313 
12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:02.313 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.313 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:02.314 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
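The device scan being traced here resolves each matching PCI function to its kernel network interface purely through sysfs. A minimal standalone sketch of that mapping (the PCI address 0000:4b:00.0 is the one from the log; adjust for the machine at hand):

pci=0000:4b:00.0
# Every net interface backed by this PCI function appears as a
# subdirectory of its sysfs node.
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"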
00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:02.314 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.314 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:02.315 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:02.315 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:02.315 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:02.315 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:02.315 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:02.315 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:02.315 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.315 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:02.315 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:02.315 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:02.315 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:02.587 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:02.587 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:02.587 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:02.587 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:02.587 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:02.587 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:02.587 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:02.587 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:02.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:02.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:12:02.587 00:12:02.587 --- 10.0.0.2 ping statistics --- 00:12:02.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.587 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:12:02.587 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:02.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:02.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:12:02.588 00:12:02.588 --- 10.0.0.1 ping statistics --- 00:12:02.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.588 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=1545785 00:12:02.588 12:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 1545785 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1545785 ']' 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:02.588 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:02.849 [2024-11-04 12:16:37.213677] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:12:02.849 [2024-11-04 12:16:37.213788] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.849 [2024-11-04 12:16:37.287571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.849 [2024-11-04 12:16:37.332098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.849 [2024-11-04 12:16:37.332139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.849 [2024-11-04 12:16:37.332148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.849 [2024-11-04 12:16:37.332155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.849 [2024-11-04 12:16:37.332161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
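The bring-up traced above isolates the target-side NIC in its own network namespace, opens the NVMe/TCP port, and launches nvmf_tgt inside that namespace. Condensed into a standalone sketch (interface names, addresses, core mask, and flags are the ones from the log; run as root, and note the binary path is relative to the SPDK tree):

# Move the target-side interface into a private namespace and address
# both ends of the link.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic on the initiator side and verify reachability.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2

# Start the target inside the namespace (same shm id, event mask, and
# core mask as the run above).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &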
00:12:02.849 [2024-11-04 12:16:37.333783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.849 [2024-11-04 12:16:37.333999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.849 [2024-11-04 12:16:37.333999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.849 [2024-11-04 12:16:37.333865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.798 [2024-11-04 12:16:38.061127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.798 12:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.798 [2024-11-04 12:16:38.131048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:03.798 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:07.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:22.216 rmmod nvme_tcp 00:12:22.216 rmmod nvme_fabrics 00:12:22.216 rmmod nvme_keyring 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 1545785 ']' 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 1545785 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1545785 ']' 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1545785 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
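
The target provisioning traced earlier in this test (transport, malloc bdev, subsystem, namespace, listener) maps one-to-one onto plain scripts/rpc.py calls; rpc_cmd in the harness is a thin wrapper around exactly these. A condensed sketch, assuming the default socket path:

rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0        # flags as traced: -u io-unit-size, -c in-capsule-data-size
$rpc bdev_malloc_create 64 512                           # 64 MiB ramdisk, 512 B blocks -> returns "Malloc0"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a allows any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected" lines above are the iterations of the test's connect/disconnect loop (num_iterations=5) against that listener.
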
00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1545785 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1545785' 00:12:22.216 killing process with pid 1545785 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1545785 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1545785 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.216 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.764 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:24.764 00:12:24.764 real 0m28.928s 00:12:24.764 user 1m18.882s 00:12:24.764 sys 0m6.937s 00:12:24.764 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.764 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:24.764 ************************************ 00:12:24.764 END TEST nvmf_connect_disconnect 00:12:24.764 ************************************ 00:12:24.764 12:16:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:24.764 12:16:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:24.764 12:16:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.764 12:16:58 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.764 ************************************ 00:12:24.764 START TEST nvmf_multitarget 00:12:24.764 ************************************ 00:12:24.764 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:24.764 * Looking for test storage... 00:12:24.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.764 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:24.764 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:24.764 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.764 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:24.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.765 --rc genhtml_branch_coverage=1 00:12:24.765 --rc genhtml_function_coverage=1 00:12:24.765 --rc genhtml_legend=1 00:12:24.765 --rc geninfo_all_blocks=1 00:12:24.765 --rc geninfo_unexecuted_blocks=1 00:12:24.765 00:12:24.765 ' 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:24.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.765 --rc genhtml_branch_coverage=1 00:12:24.765 --rc genhtml_function_coverage=1 00:12:24.765 --rc genhtml_legend=1 00:12:24.765 --rc geninfo_all_blocks=1 00:12:24.765 --rc geninfo_unexecuted_blocks=1 00:12:24.765 00:12:24.765 ' 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:24.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.765 --rc genhtml_branch_coverage=1 00:12:24.765 --rc genhtml_function_coverage=1 00:12:24.765 --rc genhtml_legend=1 00:12:24.765 --rc geninfo_all_blocks=1 00:12:24.765 --rc geninfo_unexecuted_blocks=1 00:12:24.765 00:12:24.765 ' 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:24.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.765 --rc genhtml_branch_coverage=1 00:12:24.765 --rc genhtml_function_coverage=1 00:12:24.765 --rc genhtml_legend=1 00:12:24.765 --rc geninfo_all_blocks=1 00:12:24.765 --rc geninfo_unexecuted_blocks=1 00:12:24.765 00:12:24.765 ' 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.765 12:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:24.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:24.765 12:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:24.765 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:31.356 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.356 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:31.356 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:31.356 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:31.356 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:31.356 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:31.356 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:31.356 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:31.357 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:31.357 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:31.357 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:31.618 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:31.618 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.618 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.618 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.618 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.618 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:31.618 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.618 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.618 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.619 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:31.619 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:31.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:12:31.619 00:12:31.619 --- 10.0.0.2 ping statistics --- 00:12:31.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.619 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:12:31.619 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:31.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:12:31.880 00:12:31.880 --- 10.0.0.1 ping statistics --- 00:12:31.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.880 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=1553901 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 1553901 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1553901 ']' 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.880 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:31.880 [2024-11-04 12:17:06.292150] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:12:31.880 [2024-11-04 12:17:06.292218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.880 [2024-11-04 12:17:06.363767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.880 [2024-11-04 12:17:06.406901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.880 [2024-11-04 12:17:06.406942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.880 [2024-11-04 12:17:06.406951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.880 [2024-11-04 12:17:06.406958] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.880 [2024-11-04 12:17:06.406965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.880 [2024-11-04 12:17:06.408725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.880 [2024-11-04 12:17:06.408833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.880 [2024-11-04 12:17:06.409151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.880 [2024-11-04 12:17:06.409153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.823 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:32.823 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:32.823 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:32.823 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:32.823 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:32.823 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.823 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:32.823 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:32.823 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:32.823 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:32.823 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:32.823 "nvmf_tgt_1" 00:12:32.823 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:33.084 "nvmf_tgt_2" 00:12:33.084 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
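
The multitarget test's core cycle is create, count, delete, recount, driven through the test's own multitarget_rpc.py wrapper (per-target RPC, unlike plain rpc.py). Stripped of the xtrace noise, the flow at this point is roughly the following; the count of 3 assumes the default target plus the two just created, and the flags are reproduced as traced rather than documented here:

rpc_py=./test/nvmf/target/multitarget_rpc.py
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]     # default target + the two above
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]     # back to only the default target

The "'[' 3 '!=' 3 ']'" and "'[' 1 '!=' 1 ']'" traces that follow are these two count assertions passing.
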
00:12:33.084 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:33.084 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:33.084 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:33.084 true 00:12:33.345 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:33.346 true 00:12:33.346 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:33.346 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:33.346 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:33.346 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:33.346 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:33.346 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:33.346 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:33.346 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:33.346 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:33.346 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:33.346 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:33.346 rmmod nvme_tcp 00:12:33.346 rmmod nvme_fabrics 00:12:33.346 rmmod nvme_keyring 00:12:33.607 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:33.607 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:33.607 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:33.607 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 1553901 ']' 00:12:33.607 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 1553901 00:12:33.607 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1553901 ']' 00:12:33.607 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1553901 00:12:33.607 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:33.607 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:33.607 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1553901 00:12:33.607 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:33.607 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:33.607 12:17:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1553901' 00:12:33.607 killing process with pid 1553901 00:12:33.607 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1553901 00:12:33.607 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1553901 00:12:33.607 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:33.607 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:33.607 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:33.607 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:33.607 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:12:33.607 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:12:33.607 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:33.607 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:33.607 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:33.607 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.607 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.607 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:36.166 00:12:36.166 real 0m11.356s 00:12:36.166 user 0m9.749s 00:12:36.166 sys 0m5.813s 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:36.166 ************************************ 00:12:36.166 END TEST nvmf_multitarget 00:12:36.166 ************************************ 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:36.166 ************************************ 00:12:36.166 START TEST nvmf_rpc 00:12:36.166 ************************************ 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:36.166 * Looking for test storage... 
00:12:36.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:36.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.166 --rc genhtml_branch_coverage=1 00:12:36.166 --rc genhtml_function_coverage=1 00:12:36.166 --rc genhtml_legend=1 00:12:36.166 --rc geninfo_all_blocks=1 00:12:36.166 --rc geninfo_unexecuted_blocks=1 00:12:36.166 00:12:36.166 ' 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:36.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.166 --rc genhtml_branch_coverage=1 00:12:36.166 --rc genhtml_function_coverage=1 00:12:36.166 --rc genhtml_legend=1 00:12:36.166 --rc geninfo_all_blocks=1 00:12:36.166 --rc geninfo_unexecuted_blocks=1 00:12:36.166 00:12:36.166 ' 00:12:36.166 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:36.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.166 --rc genhtml_branch_coverage=1 00:12:36.166 --rc genhtml_function_coverage=1 00:12:36.167 --rc genhtml_legend=1 00:12:36.167 --rc geninfo_all_blocks=1 00:12:36.167 --rc geninfo_unexecuted_blocks=1 00:12:36.167 00:12:36.167 ' 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:36.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.167 --rc genhtml_branch_coverage=1 00:12:36.167 --rc genhtml_function_coverage=1 00:12:36.167 --rc genhtml_legend=1 00:12:36.167 --rc geninfo_all_blocks=1 00:12:36.167 --rc geninfo_unexecuted_blocks=1 00:12:36.167 00:12:36.167 ' 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
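
The lt 1.15 2 / cmp_versions trace just above (the same check ran in the multitarget test) is a field-wise numeric version compare used to decide which lcov coverage options apply. A condensed sketch of the same idea, written independently of the scripts/common.sh internals:

# Returns success (0) when version $1 sorts strictly before $2, comparing dot/dash fields numerically.
lt() {
        local -a a b
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
                (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
                (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo 'lcov 1.15 predates 2.x; use the legacy --rc options'
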
00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:36.167 12:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:36.167 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.311 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:44.312 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:44.312 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:44.312 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:44.312 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:44.312 12:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:44.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:44.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:12:44.312 00:12:44.312 --- 10.0.0.2 ping statistics --- 00:12:44.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.312 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:44.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:12:44.312 00:12:44.312 --- 10.0.0.1 ping statistics --- 00:12:44.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.312 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=1558498 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 1558498 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1558498 ']' 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:44.312 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.313 [2024-11-04 12:17:17.805688] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
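For anyone replaying this phase by hand: nvmftestinit has just carved the two E810 ports into a point-to-point rig, moving cvl_0_0 into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) and leaving cvl_0_1 in the root namespace as the initiator (10.0.0.1), then verified the path with one ping in each direction. A minimal sketch of the same plumbing, reusing the interface names and addresses from this run (the iptables rule is shown without the SPDK_NVMF bookkeeping comment the helper adds):

    # Target-side port goes into a private namespace; initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic to the port the target will listen on.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
    # nvmfappstart then runs the target inside the namespace; waitforlisten
    # polls /var/tmp/spdk.sock (up to max_retries=100) before the test proceeds.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &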
00:12:44.313 [2024-11-04 12:17:17.805764] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.313 [2024-11-04 12:17:17.881508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.313 [2024-11-04 12:17:17.925351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.313 [2024-11-04 12:17:17.925393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.313 [2024-11-04 12:17:17.925401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.313 [2024-11-04 12:17:17.925408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.313 [2024-11-04 12:17:17.925414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.313 [2024-11-04 12:17:17.927259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.313 [2024-11-04 12:17:17.927380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.313 [2024-11-04 12:17:17.927541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.313 [2024-11-04 12:17:17.927542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:44.313 "tick_rate": 2400000000, 00:12:44.313 "poll_groups": [ 00:12:44.313 { 00:12:44.313 "name": "nvmf_tgt_poll_group_000", 00:12:44.313 "admin_qpairs": 0, 00:12:44.313 "io_qpairs": 0, 00:12:44.313 "current_admin_qpairs": 0, 00:12:44.313 "current_io_qpairs": 0, 00:12:44.313 "pending_bdev_io": 0, 00:12:44.313 "completed_nvme_io": 0, 00:12:44.313 "transports": [] 00:12:44.313 }, 00:12:44.313 { 00:12:44.313 "name": "nvmf_tgt_poll_group_001", 00:12:44.313 "admin_qpairs": 0, 00:12:44.313 "io_qpairs": 0, 00:12:44.313 "current_admin_qpairs": 0, 00:12:44.313 "current_io_qpairs": 0, 00:12:44.313 "pending_bdev_io": 0, 00:12:44.313 "completed_nvme_io": 0, 00:12:44.313 "transports": [] 00:12:44.313 }, 00:12:44.313 { 00:12:44.313 "name": "nvmf_tgt_poll_group_002", 00:12:44.313 "admin_qpairs": 0, 00:12:44.313 "io_qpairs": 0, 00:12:44.313 
"current_admin_qpairs": 0, 00:12:44.313 "current_io_qpairs": 0, 00:12:44.313 "pending_bdev_io": 0, 00:12:44.313 "completed_nvme_io": 0, 00:12:44.313 "transports": [] 00:12:44.313 }, 00:12:44.313 { 00:12:44.313 "name": "nvmf_tgt_poll_group_003", 00:12:44.313 "admin_qpairs": 0, 00:12:44.313 "io_qpairs": 0, 00:12:44.313 "current_admin_qpairs": 0, 00:12:44.313 "current_io_qpairs": 0, 00:12:44.313 "pending_bdev_io": 0, 00:12:44.313 "completed_nvme_io": 0, 00:12:44.313 "transports": [] 00:12:44.313 } 00:12:44.313 ] 00:12:44.313 }' 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.313 [2024-11-04 12:17:18.779244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:44.313 "tick_rate": 2400000000, 00:12:44.313 "poll_groups": [ 00:12:44.313 { 00:12:44.313 "name": "nvmf_tgt_poll_group_000", 00:12:44.313 "admin_qpairs": 0, 00:12:44.313 "io_qpairs": 0, 00:12:44.313 "current_admin_qpairs": 0, 00:12:44.313 "current_io_qpairs": 0, 00:12:44.313 "pending_bdev_io": 0, 00:12:44.313 "completed_nvme_io": 0, 00:12:44.313 "transports": [ 00:12:44.313 { 00:12:44.313 "trtype": "TCP" 00:12:44.313 } 00:12:44.313 ] 00:12:44.313 }, 00:12:44.313 { 00:12:44.313 "name": "nvmf_tgt_poll_group_001", 00:12:44.313 "admin_qpairs": 0, 00:12:44.313 "io_qpairs": 0, 00:12:44.313 "current_admin_qpairs": 0, 00:12:44.313 "current_io_qpairs": 0, 00:12:44.313 "pending_bdev_io": 0, 00:12:44.313 "completed_nvme_io": 0, 00:12:44.313 "transports": [ 00:12:44.313 { 00:12:44.313 "trtype": "TCP" 00:12:44.313 } 00:12:44.313 ] 00:12:44.313 }, 00:12:44.313 { 00:12:44.313 "name": "nvmf_tgt_poll_group_002", 00:12:44.313 "admin_qpairs": 0, 00:12:44.313 "io_qpairs": 0, 00:12:44.313 "current_admin_qpairs": 0, 00:12:44.313 "current_io_qpairs": 0, 00:12:44.313 "pending_bdev_io": 0, 00:12:44.313 "completed_nvme_io": 0, 00:12:44.313 "transports": [ 00:12:44.313 { 00:12:44.313 "trtype": "TCP" 
00:12:44.313 } 00:12:44.313 ] 00:12:44.313 }, 00:12:44.313 { 00:12:44.313 "name": "nvmf_tgt_poll_group_003", 00:12:44.313 "admin_qpairs": 0, 00:12:44.313 "io_qpairs": 0, 00:12:44.313 "current_admin_qpairs": 0, 00:12:44.313 "current_io_qpairs": 0, 00:12:44.313 "pending_bdev_io": 0, 00:12:44.313 "completed_nvme_io": 0, 00:12:44.313 "transports": [ 00:12:44.313 { 00:12:44.313 "trtype": "TCP" 00:12:44.313 } 00:12:44.313 ] 00:12:44.313 } 00:12:44.313 ] 00:12:44.313 }' 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:44.313 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.575 Malloc1 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.575 [2024-11-04 12:17:18.984608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:44.575 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:44.575 [2024-11-04 12:17:19.021594] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:44.575 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:44.575 could not add new controller: failed to write to nvme-fabrics device 00:12:44.575 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:44.575 12:17:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:44.575 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:44.575 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:44.575 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:44.575 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.575 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.575 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.575 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.490 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.490 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:46.490 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.490 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:46.490 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.405 [2024-11-04 12:17:22.757677] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:48.405 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:48.405 could not add new controller: failed to write to nvme-fabrics device 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.405 
12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.405 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.790 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.790 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:49.790 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.790 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:49.790 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.337 
12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.337 [2024-11-04 12:17:26.489051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.337 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.768 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.768 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:53.768 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.768 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:53.768 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:55.714 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:55.714 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:55.714 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.715 [2024-11-04 12:17:30.206206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.715 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.629 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:57.629 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:57.629 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:57.629 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:57.629 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.585 [2024-11-04 12:17:33.964722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.585 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:00.970 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.970 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:00.970 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.970 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:00.970 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:03.513 
12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.513 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
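rpc_cmd in this trace is the autotest wrapper around scripts/rpc.py on /var/tmp/spdk.sock, so each pass of the for i in $(seq 1 $loops) cycle (loops=5, set at target/rpc.sh@11) boils down to the calls below. A sketch using the literal arguments from this run; Malloc1 is the 64 MB / 512 B-block bdev created at target/rpc.sh@49, and NVME_HOSTNQN/NVME_HOSTID hold the gen-hostnqn values exported near the top of this trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Provision: subsystem, TCP listener, namespace 5 backed by Malloc1, open ACL.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # Attach from the initiator side, wait for the serial, then tear down.
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1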
00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.514 [2024-11-04 12:17:37.694419] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.514 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.899 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.899 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:04.900 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.900 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:04.900 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:06.822 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:06.822 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
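On the host side, each iteration pairs an nvme-cli connect with a disconnect once the serial has appeared (sketch; the hostnqn/hostid values come from nvme gen-hostnqn in common.sh, as seen in this trace):

    # host side of one iteration (nvme-cli)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    waitforserial SPDKISFASTANDAWESOME            # as sketched above
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME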
00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.823 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.083 [2024-11-04 12:17:41.414840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.083 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.463 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.463 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:08.463 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.463 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:08.463 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:10.371 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:10.371 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:10.371 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.632 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:10.632 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.632 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:10.632 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:10.632 
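The second stress loop that follows (rpc.sh@99-107) drops the host connect entirely and just cycles subsystem state on the target; roughly (a sketch, assuming the same $rpc and Malloc1 as above):

    loops=5                      # matches the `seq 1 5` traced here
    for i in $(seq 1 "$loops"); do
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # first ns gets nsid 1
        $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done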
12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.632 [2024-11-04 12:17:45.143014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.632 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.892 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.892 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.892 [2024-11-04 12:17:45.207077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 
12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 [2024-11-04 12:17:45.275297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 [2024-11-04 12:17:45.347492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 [2024-11-04 12:17:45.411703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.154 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:11.154 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.154 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.154 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.154 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:11.154 "tick_rate": 2400000000, 00:13:11.154 "poll_groups": [ 00:13:11.154 { 00:13:11.154 "name": "nvmf_tgt_poll_group_000", 00:13:11.154 "admin_qpairs": 0, 00:13:11.155 "io_qpairs": 224, 00:13:11.155 "current_admin_qpairs": 0, 00:13:11.155 "current_io_qpairs": 0, 00:13:11.155 "pending_bdev_io": 0, 00:13:11.155 "completed_nvme_io": 415, 00:13:11.155 "transports": [ 00:13:11.155 { 00:13:11.155 "trtype": "TCP" 00:13:11.155 } 00:13:11.155 ] 00:13:11.155 }, 00:13:11.155 { 00:13:11.155 "name": "nvmf_tgt_poll_group_001", 00:13:11.155 "admin_qpairs": 1, 00:13:11.155 "io_qpairs": 223, 00:13:11.155 "current_admin_qpairs": 0, 00:13:11.155 "current_io_qpairs": 0, 00:13:11.155 "pending_bdev_io": 0, 00:13:11.155 "completed_nvme_io": 382, 00:13:11.155 "transports": [ 00:13:11.155 { 00:13:11.155 "trtype": "TCP" 00:13:11.155 } 00:13:11.155 ] 00:13:11.155 }, 00:13:11.155 { 00:13:11.155 "name": "nvmf_tgt_poll_group_002", 00:13:11.155 "admin_qpairs": 6, 00:13:11.155 "io_qpairs": 218, 00:13:11.155 "current_admin_qpairs": 0, 00:13:11.155 "current_io_qpairs": 0, 00:13:11.155 "pending_bdev_io": 0, 00:13:11.155 "completed_nvme_io": 218, 00:13:11.155 "transports": [ 00:13:11.155 { 00:13:11.155 "trtype": "TCP" 00:13:11.155 } 00:13:11.155 ] 00:13:11.155 }, 00:13:11.155 { 00:13:11.155 "name": "nvmf_tgt_poll_group_003", 00:13:11.155 "admin_qpairs": 0, 00:13:11.155 "io_qpairs": 224, 00:13:11.155 "current_admin_qpairs": 0, 00:13:11.155 "current_io_qpairs": 0, 00:13:11.155 "pending_bdev_io": 0, 00:13:11.155 "completed_nvme_io": 224, 00:13:11.155 "transports": [ 00:13:11.155 { 00:13:11.155 "trtype": "TCP" 00:13:11.155 } 00:13:11.155 ] 00:13:11.155 } 00:13:11.155 ] 00:13:11.155 }' 00:13:11.155 12:17:45 
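Once the loops finish, the test pulls nvmf_get_stats and sums fields across poll groups; the JSON captured above can be reduced the same way the jsum calls below do (sketch, assuming jq and the $stats variable from the trace):

    # sum one numeric field across all poll groups
    jsum() {
        local filter=$1
        jq "$filter" <<<"$stats" | awk '{s += $1} END {print s}'
    }
    jsum '.poll_groups[].admin_qpairs'   # -> 7 in this run (0+1+6+0)
    jsum '.poll_groups[].io_qpairs'      # -> 889 in this run (224+223+218+224)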
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:11.155 rmmod nvme_tcp 00:13:11.155 rmmod nvme_fabrics 00:13:11.155 rmmod nvme_keyring 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 1558498 ']' 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 1558498 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1558498 ']' 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1558498 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1558498 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1558498' 00:13:11.155 killing process with pid 1558498 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1558498 00:13:11.155 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1558498 00:13:11.416 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:11.416 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:11.416 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:11.416 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:11.416 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:13:11.416 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:11.416 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:13:11.416 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:11.416 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:11.416 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.416 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.416 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.959 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:13.959 00:13:13.959 real 0m37.643s 00:13:13.959 user 1m53.456s 00:13:13.959 sys 0m7.703s 00:13:13.959 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.959 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.959 ************************************ 00:13:13.959 END TEST nvmf_rpc 00:13:13.959 ************************************ 00:13:13.959 12:17:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:13.959 12:17:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:13.959 12:17:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.959 12:17:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:13.959 ************************************ 00:13:13.959 START TEST nvmf_invalid 00:13:13.959 ************************************ 00:13:13.959 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:13.959 * Looking for test storage... 
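The nvmftestfini teardown traced above at the end of nvmf_rpc boils down to roughly this (a sketch; the pid and interface/namespace names are the ones from this run, and the exact module/namespace removal commands are assumptions based on the rmmod and remove_spdk_ns lines):

    # teardown sketch
    sync
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring     # unload host-side modules
    kill "$nvmfpid"                                       # nvmf_tgt app, 1558498 here
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the test's ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                       # remove the target namespace
    ip -4 addr flush cvl_0_1                              # clear the initiator-side address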
00:13:13.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.959 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:13.959 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:13.959 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:13.959 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:13.959 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.959 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.959 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.959 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.959 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.959 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.959 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.959 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.959 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.959 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.959 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.959 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:13.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.960 --rc genhtml_branch_coverage=1 00:13:13.960 --rc genhtml_function_coverage=1 00:13:13.960 --rc genhtml_legend=1 00:13:13.960 --rc geninfo_all_blocks=1 00:13:13.960 --rc geninfo_unexecuted_blocks=1 00:13:13.960 00:13:13.960 ' 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:13.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.960 --rc genhtml_branch_coverage=1 00:13:13.960 --rc genhtml_function_coverage=1 00:13:13.960 --rc genhtml_legend=1 00:13:13.960 --rc geninfo_all_blocks=1 00:13:13.960 --rc geninfo_unexecuted_blocks=1 00:13:13.960 00:13:13.960 ' 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:13.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.960 --rc genhtml_branch_coverage=1 00:13:13.960 --rc genhtml_function_coverage=1 00:13:13.960 --rc genhtml_legend=1 00:13:13.960 --rc geninfo_all_blocks=1 00:13:13.960 --rc geninfo_unexecuted_blocks=1 00:13:13.960 00:13:13.960 ' 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:13.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.960 --rc genhtml_branch_coverage=1 00:13:13.960 --rc genhtml_function_coverage=1 00:13:13.960 --rc genhtml_legend=1 00:13:13.960 --rc geninfo_all_blocks=1 00:13:13.960 --rc geninfo_unexecuted_blocks=1 00:13:13.960 00:13:13.960 ' 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:13.960 12:17:48 
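The lcov version gate traced above (scripts/common.sh's lt/cmp_versions) compares dotted versions field by field; a condensed sketch of that logic:

    # succeed if dotted version $1 is strictly less than $2 (sketch of cmp_versions)
    lt() {
        local IFS=.- v a b ver1 ver2
        read -ra ver1 <<<"$1"
        read -ra ver2 <<<"$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov < 2: use the --rc lcov_branch_coverage=1 option syntax"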
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:13.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
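The "[: : integer expression expected" complaint logged above comes from feeding an empty variable to a numeric test ('[' '' -eq 1 ']' at common.sh line 33); the usual hardening is to default the expansion before comparing (sketch; the variable name here is hypothetical):

    # fails when the variable is empty or unset:
    #   [ "$SPDK_TEST_SOMETHING" -eq 1 ]   -> "[: : integer expression expected"
    # defaulting the expansion keeps the test numeric:
    if [ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi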
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:13.960 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:22.099 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:22.100 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:22.100 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:22.100 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:22.100 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
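The device discovery traced above walks sysfs for the whitelisted vendor:device IDs and collects the attached net interfaces; approximately (a sketch, using the Intel E810 ID 0x8086:0x159b found in this run):

    # find net interfaces backed by E810 (8086:159b) ports, as the trace did
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && net_devs+=("${net##*/}")   # e.g. cvl_0_0, cvl_0_1
        done
    done
    printf 'found: %s\n' "${net_devs[@]}"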
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:22.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:22.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms
00:13:22.100
00:13:22.100 --- 10.0.0.2 ping statistics ---
00:13:22.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:22.100 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:22.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:22.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms
00:13:22.100
00:13:22.100 --- 10.0.0.1 ping statistics ---
00:13:22.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:22.100 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=1568205
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 1568205
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1568205 ']'
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:22.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:22.100 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:22.100 [2024-11-04 12:17:55.618758] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
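Condensed for reference, the nvmf_tcp_init sequence traced above wires the two ports together as follows (interface, namespace, and address values exactly as reported in this run; this is a summary of the traced commands, not an independent recipe):

    ip netns add cvl_0_0_ns_spdk                    # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                              # reachability checks, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With the namespace reachable, nvmf_tgt is then launched through the same ip netns exec prefix, as the nvmf/common.sh@506 trace line above shows.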
00:13:22.100 [2024-11-04 12:17:55.618831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:22.100 [2024-11-04 12:17:55.690935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:22.100 [2024-11-04 12:17:55.734653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:22.100 [2024-11-04 12:17:55.734694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:22.101 [2024-11-04 12:17:55.734703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:22.101 [2024-11-04 12:17:55.734709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:22.101 [2024-11-04 12:17:55.734715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:22.101 [2024-11-04 12:17:55.736399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:22.101 [2024-11-04 12:17:55.736538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:22.101 [2024-11-04 12:17:55.736703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:22.101 [2024-11-04 12:17:55.736703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:22.101 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:22.101 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0
00:13:22.101 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:13:22.101 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:22.101 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:22.101 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:22.101 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:13:22.101 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27884
00:13:22.101 [2024-11-04 12:17:56.624660] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:13:22.101 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:13:22.101 {
00:13:22.101 "nqn": "nqn.2016-06.io.spdk:cnode27884",
00:13:22.101 "tgt_name": "foobar",
00:13:22.101 "method": "nvmf_create_subsystem",
00:13:22.101 "req_id": 1
00:13:22.101 }
00:13:22.101 Got JSON-RPC error response
00:13:22.101 response:
00:13:22.101 {
00:13:22.101 "code": -32603,
00:13:22.101 "message": "Unable to find target foobar"
00:13:22.101 }'
00:13:22.101 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:13:22.101 {
00:13:22.101 "nqn": "nqn.2016-06.io.spdk:cnode27884",
00:13:22.101 "tgt_name": "foobar",
00:13:22.101 "method": "nvmf_create_subsystem",
00:13:22.101 "req_id": 1
00:13:22.101 }
00:13:22.101 Got JSON-RPC error response
00:13:22.101 response:
00:13:22.101 {
00:13:22.101 "code": -32603,
00:13:22.101 "message": "Unable to find target foobar"
00:13:22.101 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:13:22.101 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:13:22.101 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode20920
00:13:22.361 [2024-11-04 12:17:56.809305] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20920: invalid serial number 'SPDKISFASTANDAWESOME'
00:13:22.361 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:13:22.361 {
00:13:22.361 "nqn": "nqn.2016-06.io.spdk:cnode20920",
00:13:22.361 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:22.361 "method": "nvmf_create_subsystem",
00:13:22.361 "req_id": 1
00:13:22.361 }
00:13:22.361 Got JSON-RPC error response
00:13:22.361 response:
00:13:22.361 {
00:13:22.361 "code": -32602,
00:13:22.361 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:22.361 }'
00:13:22.361 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:13:22.361 {
00:13:22.361 "nqn": "nqn.2016-06.io.spdk:cnode20920",
00:13:22.361 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:22.361 "method": "nvmf_create_subsystem",
00:13:22.361 "req_id": 1
00:13:22.361 }
00:13:22.361 Got JSON-RPC error response
00:13:22.361 response:
00:13:22.361 {
00:13:22.361 "code": -32602,
00:13:22.361 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:22.361 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:13:22.361 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:13:22.361 12:17:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31260
00:13:22.622 [2024-11-04 12:17:57.001888] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31260: invalid model number 'SPDK_Controller'
00:13:22.622 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:13:22.622 {
00:13:22.622 "nqn": "nqn.2016-06.io.spdk:cnode31260",
00:13:22.622 "model_number": "SPDK_Controller\u001f",
00:13:22.622 "method": "nvmf_create_subsystem",
00:13:22.623 "req_id": 1
00:13:22.623 }
00:13:22.623 Got JSON-RPC error response
00:13:22.623 response:
00:13:22.623 {
00:13:22.623 "code": -32602,
00:13:22.623 "message": "Invalid MN SPDK_Controller\u001f"
00:13:22.623 }'
00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:13:22.623 {
00:13:22.623 "nqn": "nqn.2016-06.io.spdk:cnode31260",
00:13:22.623 "model_number": "SPDK_Controller\u001f",
00:13:22.623 "method": "nvmf_create_subsystem",
00:13:22.623 "req_id": 1
00:13:22.623 }
00:13:22.623 Got JSON-RPC error response
00:13:22.623 response:
00:13:22.623 {
00:13:22.623 "code": -32602,
00:13:22.623 "message": "Invalid MN SPDK_Controller\u001f"
00:13:22.623 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:13:22.623 12:17:57
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
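The long printf/echo/string+= stretch running through here is a single character-generation loop. As a compact reading, gen_random_s amounts to the sketch below (reconstructed from the traced steps, not the verbatim test/nvmf/target/invalid.sh source; gen_random_s 21 above builds the 21-character serial number, and gen_random_s 41 further down builds a 41-character model number):

    gen_random_s() {
        local length=$1 ll string=
        local chars=({32..127})   # the ASCII codes seen in the traced chars array
        for ((ll = 0; ll < length; ll++)); do
            # pick a random code, render it as a \xNN escape, append the character
            string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }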
00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x77' 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:22.623 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:22.624 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:22.624 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.624 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.624 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:22.624 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:22.884 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:22.884 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.884 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.884 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:22.884 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:22.884 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:22.884 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.884 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.884 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ' == \- ]] 00:13:22.884 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ''\''DLhdK qVq?tRFu$Uw-LT' 00:13:22.884 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ''\''DLhdK qVq?tRFu$Uw-LT' nqn.2016-06.io.spdk:cnode30846 00:13:22.884 [2024-11-04 12:17:57.355002] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30846: invalid serial number ''DLhdK qVq?tRFu$Uw-LT' 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:22.885 { 00:13:22.885 "nqn": "nqn.2016-06.io.spdk:cnode30846", 00:13:22.885 "serial_number": "'\''DLhdK qVq?tRFu$Uw-LT", 00:13:22.885 "method": "nvmf_create_subsystem", 00:13:22.885 "req_id": 1 00:13:22.885 } 00:13:22.885 Got JSON-RPC error response 00:13:22.885 response: 00:13:22.885 { 00:13:22.885 "code": -32602, 00:13:22.885 "message": "Invalid SN '\''DLhdK qVq?tRFu$Uw-LT" 00:13:22.885 }' 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:22.885 { 00:13:22.885 "nqn": "nqn.2016-06.io.spdk:cnode30846", 00:13:22.885 "serial_number": "'DLhdK qVq?tRFu$Uw-LT", 00:13:22.885 "method": "nvmf_create_subsystem", 00:13:22.885 "req_id": 1 00:13:22.885 } 00:13:22.885 Got JSON-RPC error response 00:13:22.885 response: 00:13:22.885 { 00:13:22.885 "code": -32602, 00:13:22.885 "message": "Invalid SN 'DLhdK qVq?tRFu$Uw-LT" 00:13:22.885 } == 
*\I\n\v\a\l\i\d\ \S\N* ]] 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x5d' 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:22.885 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 78 00:13:23.146 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll 
< length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=f 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:23.147 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x36' 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 4 == \- ]] 00:13:23.148 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '430'\'']fUHo_pN@>jo4="1U5sns,UP/puf.(jo4="1U5sns,UP/puf.(jo4="1U5sns,UP/puf.(jo4=\"1U5sns,UP/puf.(jo4=\"1U5sns,UP/puf.(jo4=\"1U5sns,UP/puf.(jo4=\"1U5sns,UP/puf.( /dev/null' 00:13:25.456 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.367 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:27.367 00:13:27.367 real 0m13.832s 00:13:27.367 user 0m20.567s 00:13:27.367 sys 0m6.502s 00:13:27.367 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:27.367 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.367 ************************************ 00:13:27.367 END TEST nvmf_invalid 00:13:27.367 ************************************ 00:13:27.367 12:18:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:27.367 12:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:27.367 12:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:27.367 12:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:27.367 ************************************ 00:13:27.367 START TEST nvmf_connect_stress 00:13:27.367 ************************************ 00:13:27.367 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:27.628 * Looking for test storage... 
00:13:27.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-:
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-:
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<'
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2
00:13:27.628 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2
00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2
00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0
00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:13:27.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:27.629 --rc genhtml_branch_coverage=1
00:13:27.629 --rc genhtml_function_coverage=1
00:13:27.629 --rc genhtml_legend=1
00:13:27.629 --rc geninfo_all_blocks=1
00:13:27.629 --rc geninfo_unexecuted_blocks=1
00:13:27.629
00:13:27.629 '
00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:13:27.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:27.629 --rc genhtml_branch_coverage=1
00:13:27.629 --rc genhtml_function_coverage=1
00:13:27.629 --rc genhtml_legend=1
00:13:27.629 --rc geninfo_all_blocks=1
00:13:27.629 --rc geninfo_unexecuted_blocks=1
00:13:27.629
00:13:27.629 '
00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:13:27.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:27.629 --rc genhtml_branch_coverage=1
00:13:27.629 --rc genhtml_function_coverage=1
00:13:27.629 --rc genhtml_legend=1
00:13:27.629 --rc geninfo_all_blocks=1
00:13:27.629 --rc geninfo_unexecuted_blocks=1
00:13:27.629
00:13:27.629 '
00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:13:27.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:27.629 --rc genhtml_branch_coverage=1
00:13:27.629 --rc genhtml_function_coverage=1
00:13:27.629 --rc genhtml_legend=1
00:13:27.629 --rc geninfo_all_blocks=1
00:13:27.629 --rc geninfo_unexecuted_blocks=1
00:13:27.629
00:13:27.629 '
00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
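The lt 1.15 2 / cmp_versions walk traced above is a component-wise comparison of dot-separated version strings (the installed lcov 1.15 against the required 2). A minimal sketch of that logic, simplified from the traced scripts/common.sh steps rather than copied from them:

    # returns 0 (true) when $1 sorts strictly before $2, numeric field by field
    lt() {
        local IFS=.
        local -a ver1=($1) ver2=($2)
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1
    }
    lt 1.15 2 && echo 'lcov predates 2.x'   # true in this run, selecting the pre-2.0 --rc lcov_* option spellings seen above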
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:27.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:27.629 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:35.768 12:18:09 
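
The "[: : integer expression expected" message above is a genuine, if harmless, scripting bug: nvmf/common.sh line 33 runs a numeric test of the form [ "$VAR" -eq 1 ] while the flag it reads is unset, so test(1) sees an empty string where it expects an integer. The usual defensive pattern is to default the expansion before comparing; sketched below with a hypothetical FLAG, since the log does not show which option line 33 actually reads:

    # Hypothetical reproduction and fix; FLAG stands in for whatever
    # option test/nvmf/common.sh line 33 really tests.
    unset FLAG
    [ "$FLAG" -eq 1 ] 2>/dev/null && echo enabled   # reproduces "[: : integer expression expected"
    [ "${FLAG:-0}" -eq 1 ] && echo enabled          # defaulted: quiet, test is simply false
    (( ${FLAG:-0} == 1 )) && echo enabled           # arithmetic form, same effect
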
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:35.768 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:35.769 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:35.769 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:35.769 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:35.769 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
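
The discovery pass above resolves NICs to net devices without parsing lspci output: for each supported PCI function the script expands /sys/bus/pci/devices/$pci/net/*, which the kernel populates with that function's interface names, and that is how the two E810 ports (0000:4b:00.0/1, device 0x159b) come out as cvl_0_0 and cvl_0_1. A reduced sketch of that sysfs walk, with the vendor/device-ID filtering elided:

    # Reduced sketch of the sysfs PCI-to-netdev walk traced above
    # (no vendor/device-ID filtering; lists every network function).
    for pci in /sys/bus/pci/devices/*; do
        [[ -d $pci/net ]] || continue                # skip functions with no net device
        for dev in "$pci"/net/*; do
            echo "Found net devices under ${pci##*/}: ${dev##*/}"
        done
    done
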
-- # net_devs+=("${pci_net_devs[@]}") 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:35.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:13:35.769 00:13:35.769 --- 10.0.0.2 ping statistics --- 00:13:35.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.769 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:35.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:35.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:13:35.769 00:13:35.769 --- 10.0.0.1 ping statistics --- 00:13:35.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.769 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=1573666 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 1573666 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1573666 ']' 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
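
The nvmf_tcp_init sequence above is the standard SPDK arrangement for exercising real NVMe/TCP traffic on a single host with two ports of the same NIC: one port (cvl_0_0, 10.0.0.2) is moved into a private network namespace to play the target, the other (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, and the two pings prove the path works in both directions before any NVMe traffic flows. Condensed to the essential commands (address flushes and the iptables ACCEPT rule elided; interface names are the ones this run discovered):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
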
/var/tmp/spdk.sock...' 00:13:35.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:35.769 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.769 [2024-11-04 12:18:09.484814] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:13:35.769 [2024-11-04 12:18:09.484876] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.769 [2024-11-04 12:18:09.561035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:35.769 [2024-11-04 12:18:09.604138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.769 [2024-11-04 12:18:09.604180] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.769 [2024-11-04 12:18:09.604189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.770 [2024-11-04 12:18:09.604196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.770 [2024-11-04 12:18:09.604202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:35.770 [2024-11-04 12:18:09.605808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.770 [2024-11-04 12:18:09.605869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.770 [2024-11-04 12:18:09.606014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.770 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:35.770 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:35.770 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:35.770 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:35.770 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.770 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.770 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:35.770 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.030 [2024-11-04 12:18:10.340578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.030 [2024-11-04 12:18:10.365056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.030 NULL1 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1574136 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.030 12:18:10 
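
Stripped of the xtrace noise, the target-side bring-up in this stretch of the log is four JSON-RPC calls against the nvmf_tgt started earlier: create the TCP transport, create the subsystem, attach a listener on the namespaced address, and back it with a null bdev. rpc_cmd drives the same JSON-RPC interface as scripts/rpc.py, so replayed by hand the sequence would look roughly like this (a sketch, with flags copied from the log):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB in-capsule data
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                            # allow any host, serial, max 10 namespaces
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512                          # 1000 MiB null bdev, 512 B blocks
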
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) [... the "for i in $(seq 1 20)" / connect_stress.sh@28 "# cat" pair repeats twenty times in total, appending a block of RPC text to rpc.txt on each pass; the remaining identical iterations are omitted here ...] 00:13:36.030 12:18:10 
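
Everything from here down to 00:13:46 is a single monitor loop: connect_stress (launched above with -t 10 as PID 1574136) spends ten seconds connecting to and disconnecting from nqn.2016-06.io.spdk:cnode1 while the shell keeps proving the client is alive with kill -0 and keeps the target's RPC path busy. Reduced to its shape (redirections are not visible in the xtrace, so feeding rpc.txt into rpc_cmd is an assumption):

    PERF_PID=1574136                             # the connect_stress client launched above
    rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
    while kill -0 "$PERF_PID" 2>/dev/null; do    # signal 0 = existence check only
        rpc_cmd < "$rpcs"                        # assumed: replay the batched RPCs built above
    done
    wait "$PERF_PID"                             # reap the client once it exits
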
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574136 00:13:36.030 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.031 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.031 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.291 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.291 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574136 00:13:36.291 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.291 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.291 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.862 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.862 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574136 00:13:36.862 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.862 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.862 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.122 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.122 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574136 00:13:37.122 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.122 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.122 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.383 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.383 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574136 00:13:37.383 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.383 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.383 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.643 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.643 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574136 00:13:37.643 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.643 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.643 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.903 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.903 12:18:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574136 [... the "kill -0 1574136" / "rpc_cmd" liveness loop repeats every few hundred milliseconds from 00:13:38.473 through 00:13:45.704 while the 10-second connect_stress run is in flight; the identical iterations are omitted here ...] 00:13:45.704 12:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574136 00:13:45.704 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.704 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.704 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.275 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:46.275 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.275 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574136 00:13:46.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1574136) - No such process 00:13:46.275 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1574136 00:13:46.275 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:46.275 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:46.275 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:46.275 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:46.275 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:46.275 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:46.275 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:46.275 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:46.275 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:46.275 rmmod nvme_tcp 00:13:46.275 rmmod nvme_fabrics 00:13:46.275 rmmod nvme_keyring 00:13:46.275 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 1573666 ']' 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 1573666 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1573666 ']' 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1573666 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1573666 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 
00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1573666' 00:13:46.276 killing process with pid 1573666 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1573666 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1573666 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.276 12:18:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.826 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:48.826 00:13:48.826 real 0m20.976s 00:13:48.826 user 0m42.151s 00:13:48.826 sys 0m9.022s 00:13:48.826 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.826 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.826 ************************************ 00:13:48.826 END TEST nvmf_connect_stress 00:13:48.826 ************************************ 00:13:48.826 12:18:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:48.826 12:18:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:48.826 12:18:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.826 12:18:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:48.826 ************************************ 00:13:48.826 START TEST nvmf_fused_ordering 00:13:48.826 ************************************ 00:13:48.826 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:48.826 * Looking for test storage... 
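
One detail of the teardown above worth noting: the iptables rule installed at setup time carried an "-m comment --comment SPDK_NVMF:..." tag precisely so that the iptr helper can later strip the test's rules, and only those, by filtering the comment out of a save/restore round trip. Spelled out, with the namespace removal being an assumption about what the xtrace-silenced _remove_spdk_ns does:

    modprobe -r nvme-tcp nvme-fabrics                     # unload initiator-side kernel modules
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null           # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                              # clear the initiator address
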
00:13:48.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.826 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:48.826 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:48.826 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:48.826 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:48.826 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.826 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.826 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.826 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.826 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.826 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.826 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.826 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.826 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.826 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.826 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:48.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.827 --rc genhtml_branch_coverage=1 00:13:48.827 --rc genhtml_function_coverage=1 00:13:48.827 --rc genhtml_legend=1 00:13:48.827 --rc geninfo_all_blocks=1 00:13:48.827 --rc geninfo_unexecuted_blocks=1 00:13:48.827 00:13:48.827 ' 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:48.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.827 --rc genhtml_branch_coverage=1 00:13:48.827 --rc genhtml_function_coverage=1 00:13:48.827 --rc genhtml_legend=1 00:13:48.827 --rc geninfo_all_blocks=1 00:13:48.827 --rc geninfo_unexecuted_blocks=1 00:13:48.827 00:13:48.827 ' 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:48.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.827 --rc genhtml_branch_coverage=1 00:13:48.827 --rc genhtml_function_coverage=1 00:13:48.827 --rc genhtml_legend=1 00:13:48.827 --rc geninfo_all_blocks=1 00:13:48.827 --rc geninfo_unexecuted_blocks=1 00:13:48.827 00:13:48.827 ' 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:48.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.827 --rc genhtml_branch_coverage=1 00:13:48.827 --rc genhtml_function_coverage=1 00:13:48.827 --rc genhtml_legend=1 00:13:48.827 --rc geninfo_all_blocks=1 00:13:48.827 --rc geninfo_unexecuted_blocks=1 00:13:48.827 00:13:48.827 ' 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:48.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:48.827 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:48.828 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:48.828 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:48.828 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.828 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:48.828 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:48.828 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:48.828 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.828 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.828 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.828 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:48.828 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:48.828 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:48.828 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:56.970 12:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:56.970 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:56.970 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:56.971 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:56.971 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:56.971 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:56.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:13:56.971 00:13:56.971 --- 10.0.0.2 ping statistics --- 00:13:56.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.971 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:13:56.971 00:13:56.971 --- 10.0.0.1 ping statistics --- 00:13:56.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.971 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=1580367 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 1580367 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1580367 ']' 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:56.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.971 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.971 [2024-11-04 12:18:30.653156] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:13:56.971 [2024-11-04 12:18:30.653223] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.971 [2024-11-04 12:18:30.745884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.971 [2024-11-04 12:18:30.797454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.972 [2024-11-04 12:18:30.797513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.972 [2024-11-04 12:18:30.797522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.972 [2024-11-04 12:18:30.797529] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.972 [2024-11-04 12:18:30.797536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.972 [2024-11-04 12:18:30.798324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.972 [2024-11-04 12:18:31.513392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.972 [2024-11-04 12:18:31.529676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.972 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:57.232 NULL1 00:13:57.232 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.232 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:57.232 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.232 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:57.232 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.232 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:57.232 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.232 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:57.232 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.232 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:57.232 [2024-11-04 12:18:31.587300] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
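Consolidated, the rpc_cmd calls traced above are a six-step bring-up of the target before the fused_ordering client (whose EAL startup banner follows) begins submitting I/O. A sketch of the equivalent direct invocations, assuming rpc_cmd forwards to SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket; all subsystem names and flags are copied from the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192    # -o/-u copied verbatim from the trace (-u sets in-capsule data size)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512            # 1000 MiB null bdev, 512-byte blocks -> the "size: 1GB" namespace below
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# the client side then connects with the same transport ID string:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The -r argument is SPDK's transport-ID string format; it has to match the listener created in the third step for the attach below to succeed.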
00:13:57.232 [2024-11-04 12:18:31.587345] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580619 ]
00:13:57.804 Attached to nqn.2016-06.io.spdk:cnode1
00:13:57.804 Namespace ID: 1 size: 1GB
00:13:57.804 fused_ordering(0)
[fused_ordering(1) through fused_ordering(849) elided: 849 more one-per-operation counter lines, timestamps advancing from 00:13:57.804 to 00:13:59.474]
00:13:59.474 fused_ordering(850)
fused_ordering(851) 00:13:59.474 fused_ordering(852) 00:13:59.474 fused_ordering(853) 00:13:59.474 fused_ordering(854) 00:13:59.474 fused_ordering(855) 00:13:59.474 fused_ordering(856) 00:13:59.474 fused_ordering(857) 00:13:59.474 fused_ordering(858) 00:13:59.474 fused_ordering(859) 00:13:59.474 fused_ordering(860) 00:13:59.474 fused_ordering(861) 00:13:59.474 fused_ordering(862) 00:13:59.474 fused_ordering(863) 00:13:59.474 fused_ordering(864) 00:13:59.474 fused_ordering(865) 00:13:59.474 fused_ordering(866) 00:13:59.474 fused_ordering(867) 00:13:59.474 fused_ordering(868) 00:13:59.474 fused_ordering(869) 00:13:59.474 fused_ordering(870) 00:13:59.474 fused_ordering(871) 00:13:59.474 fused_ordering(872) 00:13:59.474 fused_ordering(873) 00:13:59.474 fused_ordering(874) 00:13:59.474 fused_ordering(875) 00:13:59.474 fused_ordering(876) 00:13:59.474 fused_ordering(877) 00:13:59.474 fused_ordering(878) 00:13:59.474 fused_ordering(879) 00:13:59.474 fused_ordering(880) 00:13:59.474 fused_ordering(881) 00:13:59.474 fused_ordering(882) 00:13:59.474 fused_ordering(883) 00:13:59.474 fused_ordering(884) 00:13:59.474 fused_ordering(885) 00:13:59.474 fused_ordering(886) 00:13:59.474 fused_ordering(887) 00:13:59.474 fused_ordering(888) 00:13:59.474 fused_ordering(889) 00:13:59.474 fused_ordering(890) 00:13:59.474 fused_ordering(891) 00:13:59.474 fused_ordering(892) 00:13:59.474 fused_ordering(893) 00:13:59.474 fused_ordering(894) 00:13:59.474 fused_ordering(895) 00:13:59.474 fused_ordering(896) 00:13:59.474 fused_ordering(897) 00:13:59.474 fused_ordering(898) 00:13:59.474 fused_ordering(899) 00:13:59.474 fused_ordering(900) 00:13:59.474 fused_ordering(901) 00:13:59.474 fused_ordering(902) 00:13:59.474 fused_ordering(903) 00:13:59.474 fused_ordering(904) 00:13:59.474 fused_ordering(905) 00:13:59.474 fused_ordering(906) 00:13:59.474 fused_ordering(907) 00:13:59.474 fused_ordering(908) 00:13:59.474 fused_ordering(909) 00:13:59.474 fused_ordering(910) 00:13:59.474 fused_ordering(911) 00:13:59.474 fused_ordering(912) 00:13:59.474 fused_ordering(913) 00:13:59.474 fused_ordering(914) 00:13:59.474 fused_ordering(915) 00:13:59.474 fused_ordering(916) 00:13:59.474 fused_ordering(917) 00:13:59.474 fused_ordering(918) 00:13:59.474 fused_ordering(919) 00:13:59.474 fused_ordering(920) 00:13:59.474 fused_ordering(921) 00:13:59.474 fused_ordering(922) 00:13:59.474 fused_ordering(923) 00:13:59.474 fused_ordering(924) 00:13:59.474 fused_ordering(925) 00:13:59.474 fused_ordering(926) 00:13:59.474 fused_ordering(927) 00:13:59.474 fused_ordering(928) 00:13:59.474 fused_ordering(929) 00:13:59.474 fused_ordering(930) 00:13:59.474 fused_ordering(931) 00:13:59.474 fused_ordering(932) 00:13:59.474 fused_ordering(933) 00:13:59.474 fused_ordering(934) 00:13:59.474 fused_ordering(935) 00:13:59.474 fused_ordering(936) 00:13:59.474 fused_ordering(937) 00:13:59.474 fused_ordering(938) 00:13:59.474 fused_ordering(939) 00:13:59.474 fused_ordering(940) 00:13:59.474 fused_ordering(941) 00:13:59.474 fused_ordering(942) 00:13:59.474 fused_ordering(943) 00:13:59.474 fused_ordering(944) 00:13:59.474 fused_ordering(945) 00:13:59.474 fused_ordering(946) 00:13:59.474 fused_ordering(947) 00:13:59.474 fused_ordering(948) 00:13:59.474 fused_ordering(949) 00:13:59.474 fused_ordering(950) 00:13:59.474 fused_ordering(951) 00:13:59.474 fused_ordering(952) 00:13:59.474 fused_ordering(953) 00:13:59.474 fused_ordering(954) 00:13:59.474 fused_ordering(955) 00:13:59.474 fused_ordering(956) 00:13:59.474 fused_ordering(957) 00:13:59.474 fused_ordering(958) 
00:13:59.474 fused_ordering(959) 00:13:59.474 fused_ordering(960) 00:13:59.474 fused_ordering(961) 00:13:59.474 fused_ordering(962) 00:13:59.474 fused_ordering(963) 00:13:59.474 fused_ordering(964) 00:13:59.474 fused_ordering(965) 00:13:59.474 fused_ordering(966) 00:13:59.474 fused_ordering(967) 00:13:59.474 fused_ordering(968) 00:13:59.474 fused_ordering(969) 00:13:59.474 fused_ordering(970) 00:13:59.474 fused_ordering(971) 00:13:59.474 fused_ordering(972) 00:13:59.474 fused_ordering(973) 00:13:59.474 fused_ordering(974) 00:13:59.474 fused_ordering(975) 00:13:59.474 fused_ordering(976) 00:13:59.474 fused_ordering(977) 00:13:59.474 fused_ordering(978) 00:13:59.474 fused_ordering(979) 00:13:59.474 fused_ordering(980) 00:13:59.474 fused_ordering(981) 00:13:59.474 fused_ordering(982) 00:13:59.474 fused_ordering(983) 00:13:59.474 fused_ordering(984) 00:13:59.474 fused_ordering(985) 00:13:59.474 fused_ordering(986) 00:13:59.474 fused_ordering(987) 00:13:59.474 fused_ordering(988) 00:13:59.474 fused_ordering(989) 00:13:59.474 fused_ordering(990) 00:13:59.474 fused_ordering(991) 00:13:59.474 fused_ordering(992) 00:13:59.474 fused_ordering(993) 00:13:59.474 fused_ordering(994) 00:13:59.474 fused_ordering(995) 00:13:59.474 fused_ordering(996) 00:13:59.474 fused_ordering(997) 00:13:59.474 fused_ordering(998) 00:13:59.474 fused_ordering(999) 00:13:59.474 fused_ordering(1000) 00:13:59.474 fused_ordering(1001) 00:13:59.474 fused_ordering(1002) 00:13:59.474 fused_ordering(1003) 00:13:59.474 fused_ordering(1004) 00:13:59.474 fused_ordering(1005) 00:13:59.474 fused_ordering(1006) 00:13:59.474 fused_ordering(1007) 00:13:59.474 fused_ordering(1008) 00:13:59.474 fused_ordering(1009) 00:13:59.474 fused_ordering(1010) 00:13:59.474 fused_ordering(1011) 00:13:59.474 fused_ordering(1012) 00:13:59.474 fused_ordering(1013) 00:13:59.474 fused_ordering(1014) 00:13:59.474 fused_ordering(1015) 00:13:59.474 fused_ordering(1016) 00:13:59.474 fused_ordering(1017) 00:13:59.474 fused_ordering(1018) 00:13:59.474 fused_ordering(1019) 00:13:59.474 fused_ordering(1020) 00:13:59.474 fused_ordering(1021) 00:13:59.474 fused_ordering(1022) 00:13:59.474 fused_ordering(1023) 00:13:59.474 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:59.474 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:59.474 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:59.474 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:59.474 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:59.474 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:59.474 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:59.474 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:59.474 rmmod nvme_tcp 00:13:59.474 rmmod nvme_fabrics 00:13:59.474 rmmod nvme_keyring 00:13:59.474 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:59.474 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:59.474 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:59.474 12:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 1580367 ']' 00:13:59.474 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 1580367 00:13:59.474 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1580367 ']' 00:13:59.474 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1580367 00:13:59.474 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:59.474 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.474 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1580367 00:13:59.474 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:59.474 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:59.474 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1580367' 00:13:59.474 killing process with pid 1580367 00:13:59.474 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1580367 00:13:59.474 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1580367 00:13:59.736 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:59.736 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:59.736 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:59.736 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:59.736 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:13:59.736 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:59.736 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:13:59.736 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:59.736 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:59.736 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.736 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.736 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.651 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:01.651 00:14:01.651 real 0m13.241s 00:14:01.651 user 0m7.099s 00:14:01.651 sys 0m6.888s 00:14:01.651 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.651 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.651 ************************************ 00:14:01.651 END TEST nvmf_fused_ordering 00:14:01.651 
************************************ 00:14:01.912 12:18:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:01.912 12:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:01.912 12:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:01.912 12:18:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:01.912 ************************************ 00:14:01.912 START TEST nvmf_ns_masking 00:14:01.912 ************************************ 00:14:01.912 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:01.912 * Looking for test storage... 00:14:01.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.912 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:01.912 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:14:01.912 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:01.912 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:01.912 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:01.912 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:01.912 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:01.912 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:01.912 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:02.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.174 --rc genhtml_branch_coverage=1 00:14:02.174 --rc genhtml_function_coverage=1 00:14:02.174 --rc genhtml_legend=1 00:14:02.174 --rc geninfo_all_blocks=1 00:14:02.174 --rc geninfo_unexecuted_blocks=1 00:14:02.174 00:14:02.174 ' 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:02.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.174 --rc genhtml_branch_coverage=1 00:14:02.174 --rc genhtml_function_coverage=1 00:14:02.174 --rc genhtml_legend=1 00:14:02.174 --rc geninfo_all_blocks=1 00:14:02.174 --rc geninfo_unexecuted_blocks=1 00:14:02.174 00:14:02.174 ' 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:02.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.174 --rc genhtml_branch_coverage=1 00:14:02.174 --rc genhtml_function_coverage=1 00:14:02.174 --rc genhtml_legend=1 00:14:02.174 --rc geninfo_all_blocks=1 00:14:02.174 --rc geninfo_unexecuted_blocks=1 00:14:02.174 00:14:02.174 ' 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:02.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.174 --rc genhtml_branch_coverage=1 00:14:02.174 --rc genhtml_function_coverage=1 00:14:02.174 --rc genhtml_legend=1 00:14:02.174 --rc geninfo_all_blocks=1 00:14:02.174 --rc geninfo_unexecuted_blocks=1 00:14:02.174 00:14:02.174 ' 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:02.174 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:02.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=778e02aa-9510-48b3-a4eb-3066935f3675 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=61ef02ad-46a3-4f14-8a5a-4d1db0e1c88c 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=2e433052-7f9a-4055-a28e-3fe3758b83da 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:02.175 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:10.319 12:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:10.319 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:10.319 12:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:10.319 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.319 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:10.320 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
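The trace above is nvmf/common.sh enumerating the whitelisted e810 PCI functions and resolving each one to its kernel net interface through the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion. A minimal standalone sketch of that resolution step, assuming the same sysfs layout as this run (the 0000:4b:00.x addresses and the cvl_0_* names come from this log; the operstate lookup is illustrative, not part of the script):

    #!/usr/bin/env bash
    # Map PCI network functions to their kernel netdev names through sysfs,
    # mirroring the pci_net_devs expansion traced above.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for netdev_path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdev_path" ] || continue          # function has no bound netdev
            name=${netdev_path##*/}                    # e.g. cvl_0_0 in this run
            state=$(cat "/sys/class/net/$name/operstate" 2>/dev/null)
            echo "Found net devices under $pci: $name (${state:-unknown})"
        done
    done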
00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:10.320 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.320 12:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:10.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:14:10.320 00:14:10.320 --- 10.0.0.2 ping statistics --- 00:14:10.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.320 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:14:10.320 00:14:10.320 --- 10.0.0.1 ping statistics --- 00:14:10.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.320 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=1585288 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 1585288 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1585288 ']' 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.320 12:18:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.320 [2024-11-04 12:18:43.976712] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:14:10.320 [2024-11-04 12:18:43.976784] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.320 [2024-11-04 12:18:44.050696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.320 [2024-11-04 12:18:44.092138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.320 [2024-11-04 12:18:44.092178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.320 [2024-11-04 12:18:44.092187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.320 [2024-11-04 12:18:44.092194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.321 [2024-11-04 12:18:44.092200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
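Once nvmf_tgt is listening inside the cvl_0_0_ns_spdk namespace, ns_masking.sh provisions the target over /var/tmp/spdk.sock and connects from the initiator side with an explicit host identity. The rpc.py and nvme invocations that follow in this log reduce to the sequence sketched here (the workspace path is shortened to $rpc; error handling and the waitforserial polling loop are omitted):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # TCP transport plus two 64 MiB, 512-byte-block malloc bdevs for the namespaces
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2

    # subsystem with namespace 1 attached and a listener on the in-namespace address
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: connect with an explicit host NQN and host UUID (-I), the
    # identity that namespace masking keys on
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 2e433052-7f9a-4055-a28e-3fe3758b83da -a 10.0.0.2 -s 4420 -i 4

    # visibility probe (ns_is_visible in the trace below): a namespace counts as
    # visible when list-ns reports its NSID and its NGUID is non-zero
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

The masked variant later in this run re-adds Malloc1 with --no-auto-visible, which keeps the namespace out of list-ns until a host is explicitly granted access to it.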
00:14:10.321 [2024-11-04 12:18:44.092842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.321 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.321 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:10.321 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:10.321 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:10.321 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.321 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.321 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:10.617 [2024-11-04 12:18:44.983313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.617 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:10.617 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:10.617 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:10.617 Malloc1 00:14:10.617 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:10.876 Malloc2 00:14:10.876 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:11.136 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:11.136 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.396 [2024-11-04 12:18:45.816386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.396 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:11.396 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2e433052-7f9a-4055-a28e-3fe3758b83da -a 10.0.0.2 -s 4420 -i 4 00:14:11.657 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:11.657 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:11.657 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:11.657 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:11.657 
12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:13.568 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:13.568 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:13.568 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:13.568 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:13.568 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:13.568 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:13.568 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:13.568 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:13.568 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:13.568 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:13.568 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:13.568 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.568 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:13.568 [ 0]:0x1 00:14:13.568 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:13.568 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:13.568 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d46f3e9925d24723b71b20cbef4e6081 00:14:13.568 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d46f3e9925d24723b71b20cbef4e6081 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.568 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:13.828 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:13.828 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.828 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:13.828 [ 0]:0x1 00:14:13.828 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:13.828 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:13.829 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d46f3e9925d24723b71b20cbef4e6081 00:14:13.829 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d46f3e9925d24723b71b20cbef4e6081 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.829 12:18:48 
00:14:13.829 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:13.829 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:13.829 [ 1]:0x2
00:14:13.829 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:13.829 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:13.829 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7eb8514a6cad433b84caef22b3c648cb
00:14:13.829 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7eb8514a6cad433b84caef22b3c648cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:13.829 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:14:13.829 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:14.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:14.088 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:14.348 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:14:14.608 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:14:14.608 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2e433052-7f9a-4055-a28e-3fe3758b83da -a 10.0.0.2 -s 4420 -i 4
00:14:14.608 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:14:14.608 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:14:14.608 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:14:14.608 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]]
00:14:14.608 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1
00:14:14.608 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:17.148 [ 0]:0x2
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7eb8514a6cad433b84caef22b3c648cb
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7eb8514a6cad433b84caef22b3c648cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:17.148 [ 0]:0x1
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d46f3e9925d24723b71b20cbef4e6081
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d46f3e9925d24723b71b20cbef4e6081 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:17.148 [ 1]:0x2
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7eb8514a6cad433b84caef22b3c648cb
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7eb8514a6cad433b84caef22b3c648cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:17.148 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:17.409 [ 0]:0x2
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7eb8514a6cad433b84caef22b3c648cb
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7eb8514a6cad433b84caef22b3c648cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:17.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:17.409 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:14:17.670 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:14:17.670 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2e433052-7f9a-4055-a28e-3fe3758b83da -a 10.0.0.2 -s 4420 -i 4
00:14:17.930 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:14:17.930 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:14:17.930 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:14:17.930 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]]
00:14:17.930 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2
00:14:17.930 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:14:19.839 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:14:19.839 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:19.840 [ 0]:0x1
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d46f3e9925d24723b71b20cbef4e6081
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d46f3e9925d24723b71b20cbef4e6081 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:19.840 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:20.100 [ 1]:0x2
00:14:20.100 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:20.100 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:20.100 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7eb8514a6cad433b84caef22b3c648cb
00:14:20.100 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7eb8514a6cad433b84caef22b3c648cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:20.100 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:14:20.100 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:14:20.100 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:14:20.100 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:14:20.100 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:14:20.100 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:20.100 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:14:20.100 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:20.100 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:14:20.100 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:20.100 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:20.100 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:20.100 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:20.360 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:14:20.360 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:20.360 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:14:20.360 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:20.360 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:20.360 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:14:20.360 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:14:20.360 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:20.360 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:20.360 [ 0]:0x2
00:14:20.360 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:20.360 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:20.360 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7eb8514a6cad433b84caef22b3c648cb
00:14:20.360 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7eb8514a6cad433b84caef22b3c648cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:20.360 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:20.360 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:14:20.360 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:20.361 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:20.361 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:20.361 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:20.361 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:20.361 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:20.361 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:20.361 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:20.361 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:14:20.361 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:20.361 [2024-11-04 12:18:54.914743] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:14:20.361 request:
00:14:20.361 {
00:14:20.361 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:14:20.361 "nsid": 2,
00:14:20.361 "host": "nqn.2016-06.io.spdk:host1",
00:14:20.361 "method": "nvmf_ns_remove_host",
00:14:20.361 "req_id": 1
00:14:20.361 }
00:14:20.361 Got JSON-RPC error response
00:14:20.361 response:
00:14:20.361 {
00:14:20.361 "code": -32602,
00:14:20.361 "message": "Invalid parameters"
00:14:20.361 }
00:14:20.621 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:14:20.621 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:20.621 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:20.621 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:14:20.621 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:14:20.621 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:14:20.621 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:14:20.621 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:14:20.621 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:20.621 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:14:20.621 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:20.621 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:14:20.621 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:20.621 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:20.621 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:20.621 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:20.621 [ 0]:0x2
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7eb8514a6cad433b84caef22b3c648cb
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7eb8514a6cad433b84caef22b3c648cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:20.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1587633
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1587633 /var/tmp/host.sock
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1587633 ']'
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:14:20.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:20.621 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:20.621 [2024-11-04 12:18:55.171156] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
00:14:20.621 [2024-11-04 12:18:55.171207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587633 ]
00:14:20.882 [2024-11-04 12:18:55.248854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:20.882 [2024-11-04 12:18:55.284869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:21.452 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:21.452 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0
00:14:21.452 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:21.711 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:21.972 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 778e02aa-9510-48b3-a4eb-3066935f3675
00:14:21.972 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d -
00:14:21.972 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 778E02AA951048B3A4EB3066935F3675 -i
00:14:21.973 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 61ef02ad-46a3-4f14-8a5a-4d1db0e1c88c
00:14:21.973 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d -
00:14:21.973 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 61EF02AD46A34F148A5A4D1DB0E1C88C -i
00:14:22.233 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:14:22.493 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
00:14:22.493 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:14:22.493 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:14:22.753 nvme0n1
00:14:22.753 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:14:22.753 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:14:23.013 nvme1n2
00:14:23.013 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs
00:14:23.013 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name'
00:14:23.013 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:14:23.013 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort
00:14:23.013 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs
00:14:23.273 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]]
00:14:23.273 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid'
00:14:23.273 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1
00:14:23.273 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1
00:14:23.533 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 778e02aa-9510-48b3-a4eb-3066935f3675 == \7\7\8\e\0\2\a\a\-\9\5\1\0\-\4\8\b\3\-\a\4\e\b\-\3\0\6\6\9\3\5\f\3\6\7\5 ]]
00:14:23.533 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2
00:14:23.533 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid'
00:14:23.533 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2
00:14:23.533 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 61ef02ad-46a3-4f14-8a5a-4d1db0e1c88c == \6\1\e\f\0\2\a\d\-\4\6\a\3\-\4\f\1\4\-\8\a\5\a\-\4\d\1\d\b\0\e\1\c\8\8\c ]]
00:14:23.534 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1587633
00:14:23.534 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1587633 ']'
00:14:23.534 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1587633
00:14:23.534 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname
00:14:23.534 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:23.534 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1587633
00:14:23.794 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:14:23.794 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:14:23.794 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1587633'
00:14:23.794 killing process with pid 1587633
00:14:23.794 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1587633
00:14:23.794 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1587633
00:14:23.794 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:24.055 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT
00:14:24.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini
00:14:24.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup
00:14:24.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync
00:14:24.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:24.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e
00:14:24.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:24.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:24.056 rmmod nvme_tcp
00:14:24.056 rmmod nvme_fabrics
00:14:24.056 rmmod nvme_keyring
00:14:24.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:24.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e
00:14:24.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0
00:14:24.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 1585288 ']'
00:14:24.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 1585288
00:14:24.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1585288 ']'
00:14:24.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1585288
00:14:24.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname
00:14:24.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:24.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1585288
00:14:24.316 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:24.316 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:24.316 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1585288'
00:14:24.316 killing process with pid 1585288
00:14:24.316 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1585288
00:14:24.316 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1585288
00:14:24.316 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:14:24.316 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:14:24.316 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:14:24.316 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr
00:14:24.316 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save
00:14:24.316 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:14:24.316 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore
00:14:24.316 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:24.316 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:24.316 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:24.316 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:24.316 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:26.871 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:26.871
00:14:26.871 real 0m24.558s
00:14:26.871 user 0m24.791s
00:14:26.871 sys 0m7.606s
00:14:26.871 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:26.871 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:26.871 ************************************
00:14:26.871 END TEST nvmf_ns_masking
00:14:26.871 ************************************
00:14:26.871 12:19:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]]
00:14:26.871 12:19:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:14:26.871 12:19:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:14:26.871 12:19:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:26.871 12:19:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:26.871 ************************************
00:14:26.871 START TEST nvmf_nvme_cli
00:14:26.871 ************************************
00:14:26.871 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:14:26.871 * Looking for test storage...
00:14:26.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-:
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-:
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<'
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:14:26.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:26.871 --rc genhtml_branch_coverage=1
00:14:26.871 --rc genhtml_function_coverage=1
00:14:26.871 --rc genhtml_legend=1
00:14:26.871 --rc geninfo_all_blocks=1
00:14:26.871 --rc geninfo_unexecuted_blocks=1
00:14:26.871
00:14:26.871 '
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:14:26.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:26.871 --rc genhtml_branch_coverage=1
00:14:26.871 --rc genhtml_function_coverage=1
00:14:26.871 --rc genhtml_legend=1
00:14:26.871 --rc geninfo_all_blocks=1
00:14:26.871 --rc geninfo_unexecuted_blocks=1
00:14:26.871
00:14:26.871 '
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:14:26.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:26.871 --rc genhtml_branch_coverage=1
00:14:26.871 --rc genhtml_function_coverage=1
00:14:26.871 --rc genhtml_legend=1
00:14:26.871 --rc geninfo_all_blocks=1
00:14:26.871 --rc geninfo_unexecuted_blocks=1
00:14:26.871
00:14:26.871 '
00:14:26.871 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:14:26.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:26.871 --rc genhtml_branch_coverage=1
00:14:26.871 --rc genhtml_function_coverage=1
00:14:26.871 --rc genhtml_legend=1
00:14:26.871 --rc geninfo_all_blocks=1
00:14:26.871 --rc geninfo_unexecuted_blocks=1
00:14:26.871
00:14:26.871 '
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:14:26.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=()
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable
00:14:26.872 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=()
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=()
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=()
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=()
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=()
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=()
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=()
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.011 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:35.012 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:35.012 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.012 
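
Note: the array setup above is gather_supported_nvmf_pci_devs building a lookup table of NIC device IDs, Intel E810 (0x1592, 0x159b), Intel x722 (0x37d2), and a range of Mellanox ConnectX parts, keyed against the host's PCI bus cache. Because this run sets SPDK_TEST_NVMF_NICS=e810, only the e810 list is kept, which matches the two functions found next. A rough standalone equivalent of that scan (hedged, not the harness's own code):

    # List Intel E810 functions by vendor:device ID, as the harness's
    # pci_bus_cache lookup effectively does.
    lspci -d 8086:159b
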
12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:35.012 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:35.012 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:35.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:14:35.012 00:14:35.012 --- 10.0.0.2 ping statistics --- 00:14:35.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.012 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:35.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:35.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:14:35.012 00:14:35.012 --- 10.0.0.1 ping statistics --- 00:14:35.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.012 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=1592493 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 1592493 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1592493 ']' 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:35.012 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.012 [2024-11-04 12:19:08.446813] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
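
Note: the pings close out nvmf_tcp_init, which built a point-to-point test network from the two E810 ports: the first port is moved into a private namespace and becomes the target address, the second stays in the root namespace as the initiator, and the target application is then launched inside the namespace. Condensed from the trace above (full paths elided):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # becomes nvmfpid=1592493
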
00:14:35.012 [2024-11-04 12:19:08.446879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.012 [2024-11-04 12:19:08.518832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.012 [2024-11-04 12:19:08.562895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.012 [2024-11-04 12:19:08.562935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.012 [2024-11-04 12:19:08.562943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.013 [2024-11-04 12:19:08.562950] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.013 [2024-11-04 12:19:08.562956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.013 [2024-11-04 12:19:08.564563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.013 [2024-11-04 12:19:08.564704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.013 [2024-11-04 12:19:08.564851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.013 [2024-11-04 12:19:08.564851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.013 [2024-11-04 12:19:09.299766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.013 Malloc0 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
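
Note: with the target's reactors up on cores 0-3, the rpc_cmd calls that begin here and run through the next records configure it over /var/tmp/spdk.sock: a TCP transport, two 64 MiB malloc bdevs (512-byte blocks), and a subsystem exposing both as namespaces on 10.0.0.2:4420 plus a discovery listener. The same sequence as plain rpc.py invocations (paths shortened):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
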
00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.013 Malloc1 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.013 [2024-11-04 12:19:09.405516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.013 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:35.273 00:14:35.273 Discovery Log Number of Records 2, Generation counter 2 00:14:35.273 =====Discovery Log Entry 0====== 00:14:35.273 trtype: tcp 00:14:35.273 adrfam: ipv4 00:14:35.273 subtype: current discovery subsystem 00:14:35.273 treq: not required 00:14:35.273 portid: 0 00:14:35.273 trsvcid: 4420 00:14:35.273 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:35.273 traddr: 10.0.0.2 00:14:35.273 eflags: explicit discovery connections, duplicate discovery information 00:14:35.273 sectype: none 00:14:35.273 =====Discovery Log Entry 1====== 00:14:35.273 trtype: tcp 00:14:35.273 adrfam: ipv4 00:14:35.273 subtype: nvme subsystem 00:14:35.273 treq: not required 00:14:35.273 portid: 0 00:14:35.273 trsvcid: 4420 00:14:35.273 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:35.273 traddr: 10.0.0.2 00:14:35.273 eflags: none 00:14:35.273 sectype: none 00:14:35.273 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:35.273 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:35.273 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:35.273 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:35.273 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:35.273 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:35.273 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:35.273 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:35.273 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:35.273 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:35.273 12:19:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:36.656 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:36.656 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:36.656 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:36.656 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:36.656 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:36.656 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:39.203 12:19:13 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:39.203 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:39.204 /dev/nvme0n2 ]] 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:39.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.204 12:19:13 
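
Note: the initiator half of the test is ordinary nvme-cli against that listener: discover, connect, then count block devices whose serial matches the subsystem to confirm both namespaces surfaced, then disconnect. Condensed from the trace, with $HOSTNQN and $HOSTID standing in for the generated host NQN and ID shown above:

    nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expects 2: /dev/nvme0n1 and /dev/nvme0n2
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
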
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:39.204 rmmod nvme_tcp 00:14:39.204 rmmod nvme_fabrics 00:14:39.204 rmmod nvme_keyring 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 1592493 ']' 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 1592493 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1592493 ']' 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1592493 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1592493 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1592493' 00:14:39.204 killing process with pid 1592493 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1592493 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1592493 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:39.204 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.748 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:41.748 00:14:41.748 real 0m14.825s 00:14:41.748 user 0m22.598s 00:14:41.748 sys 0m6.091s 00:14:41.748 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:41.748 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.748 ************************************ 00:14:41.748 END TEST nvmf_nvme_cli 00:14:41.748 ************************************ 00:14:41.748 12:19:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:41.748 12:19:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:41.748 12:19:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:41.748 12:19:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:41.748 12:19:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:41.748 ************************************ 00:14:41.748 START TEST nvmf_vfio_user 00:14:41.748 ************************************ 00:14:41.748 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:41.748 * Looking for test storage... 00:14:41.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.748 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:41.748 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:41.748 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:41.748 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:41.748 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:41.748 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:41.748 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:41.748 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:41.748 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:41.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.749 --rc genhtml_branch_coverage=1 00:14:41.749 --rc genhtml_function_coverage=1 00:14:41.749 --rc genhtml_legend=1 00:14:41.749 --rc geninfo_all_blocks=1 00:14:41.749 --rc geninfo_unexecuted_blocks=1 00:14:41.749 00:14:41.749 ' 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:41.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.749 --rc genhtml_branch_coverage=1 00:14:41.749 --rc genhtml_function_coverage=1 00:14:41.749 --rc genhtml_legend=1 00:14:41.749 --rc geninfo_all_blocks=1 00:14:41.749 --rc geninfo_unexecuted_blocks=1 00:14:41.749 00:14:41.749 ' 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:41.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.749 --rc genhtml_branch_coverage=1 00:14:41.749 --rc genhtml_function_coverage=1 00:14:41.749 --rc genhtml_legend=1 00:14:41.749 --rc geninfo_all_blocks=1 00:14:41.749 --rc geninfo_unexecuted_blocks=1 00:14:41.749 00:14:41.749 ' 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:41.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.749 --rc genhtml_branch_coverage=1 00:14:41.749 --rc genhtml_function_coverage=1 00:14:41.749 --rc genhtml_legend=1 00:14:41.749 --rc geninfo_all_blocks=1 00:14:41.749 --rc geninfo_unexecuted_blocks=1 00:14:41.749 00:14:41.749 ' 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:41.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
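
Note: the kilometre-long PATH echoed above (and in the nvme_cli test before it) is an artifact of paths/export.sh prepending the same go/golangci/protoc directories every time a test re-sources the tree, so each nested run adds another copy of the triple. A hedged sketch of an idempotent prepend that would keep these logs readable (the helper is illustrative, not part of the harness):

    prepend_path() {
        # Prepend only if the directory is not already a PATH component.
        case ":$PATH:" in
            *":$1:"*) ;;
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
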
00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1594163 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1594163' 00:14:41.749 Process pid: 1594163 00:14:41.749 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:41.750 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1594163 00:14:41.750 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1594163 ']' 00:14:41.750 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:41.750 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.750 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.750 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.750 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.750 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:41.750 [2024-11-04 12:19:16.160491] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:14:41.750 [2024-11-04 12:19:16.160565] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.750 [2024-11-04 12:19:16.226247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:41.750 [2024-11-04 12:19:16.269446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.750 [2024-11-04 12:19:16.269487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:41.750 [2024-11-04 12:19:16.269496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.750 [2024-11-04 12:19:16.269503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.750 [2024-11-04 12:19:16.269508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.750 [2024-11-04 12:19:16.271387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.750 [2024-11-04 12:19:16.271530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.750 [2024-11-04 12:19:16.271697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.750 [2024-11-04 12:19:16.271698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:42.691 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:42.691 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:42.691 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:43.632 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:43.632 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:43.632 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:43.632 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:43.632 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:43.632 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:43.892 Malloc1 00:14:43.892 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:44.152 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:44.412 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:44.412 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:44.412 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:44.412 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:44.671 Malloc2 00:14:44.671 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:14:44.931 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:44.931 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:45.192 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:45.192 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:45.192 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:45.192 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:45.192 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:45.192 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:45.192 [2024-11-04 12:19:19.696702] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:14:45.192 [2024-11-04 12:19:19.696772] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594912 ] 00:14:45.192 [2024-11-04 12:19:19.730388] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:45.192 [2024-11-04 12:19:19.738413] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:45.192 [2024-11-04 12:19:19.738435] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4a2cbf0000 00:14:45.192 [2024-11-04 12:19:19.740753] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:45.192 [2024-11-04 12:19:19.741419] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:45.192 [2024-11-04 12:19:19.742427] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:45.192 [2024-11-04 12:19:19.743421] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:45.192 [2024-11-04 12:19:19.744427] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:45.192 [2024-11-04 12:19:19.745440] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:45.192 [2024-11-04 12:19:19.746438] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:45.192 [2024-11-04 12:19:19.747446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:45.192 [2024-11-04 12:19:19.748448] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:45.192 [2024-11-04 12:19:19.748458] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4a2cbe5000 00:14:45.192 [2024-11-04 12:19:19.749788] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:45.455 [2024-11-04 12:19:19.769909] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:45.455 [2024-11-04 12:19:19.769936] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:45.455 [2024-11-04 12:19:19.772611] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:45.455 [2024-11-04 12:19:19.772653] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:45.455 [2024-11-04 12:19:19.772741] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:45.455 [2024-11-04 12:19:19.772763] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:45.455 [2024-11-04 12:19:19.772769] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:45.455 [2024-11-04 12:19:19.773604] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:45.455 [2024-11-04 12:19:19.773614] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:45.455 [2024-11-04 12:19:19.773622] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:45.455 [2024-11-04 12:19:19.774608] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:45.455 [2024-11-04 12:19:19.774617] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:45.455 [2024-11-04 12:19:19.774625] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:45.455 [2024-11-04 12:19:19.775620] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:45.455 [2024-11-04 12:19:19.775629] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:45.455 [2024-11-04 12:19:19.776626] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:45.455 [2024-11-04 
12:19:19.776633] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:45.455 [2024-11-04 12:19:19.776638] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:45.455 [2024-11-04 12:19:19.776645] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:45.455 [2024-11-04 12:19:19.776754] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:45.455 [2024-11-04 12:19:19.776759] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:45.455 [2024-11-04 12:19:19.776765] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:45.455 [2024-11-04 12:19:19.777637] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:45.455 [2024-11-04 12:19:19.778631] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:45.455 [2024-11-04 12:19:19.779637] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:45.455 [2024-11-04 12:19:19.780633] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:45.455 [2024-11-04 12:19:19.780685] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:45.455 [2024-11-04 12:19:19.781643] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:45.455 [2024-11-04 12:19:19.781652] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:45.455 [2024-11-04 12:19:19.781657] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:45.455 [2024-11-04 12:19:19.781678] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:45.455 [2024-11-04 12:19:19.781686] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:45.455 [2024-11-04 12:19:19.781699] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:45.455 [2024-11-04 12:19:19.781705] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:45.455 [2024-11-04 12:19:19.781708] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:45.455 [2024-11-04 12:19:19.781721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:45.455 [2024-11-04 12:19:19.781763] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:45.455 [2024-11-04 12:19:19.781772] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:45.455 [2024-11-04 12:19:19.781777] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:45.455 [2024-11-04 12:19:19.781782] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:45.455 [2024-11-04 12:19:19.781786] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:45.455 [2024-11-04 12:19:19.781791] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:45.455 [2024-11-04 12:19:19.781796] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:45.455 [2024-11-04 12:19:19.781801] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:45.455 [2024-11-04 12:19:19.781809] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:45.455 [2024-11-04 12:19:19.781821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:45.455 [2024-11-04 12:19:19.781836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:45.455 [2024-11-04 12:19:19.781850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.455 [2024-11-04 12:19:19.781859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.455 [2024-11-04 12:19:19.781868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.455 [2024-11-04 12:19:19.781877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.455 [2024-11-04 12:19:19.781882] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:45.455 [2024-11-04 12:19:19.781889] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:45.455 [2024-11-04 12:19:19.781898] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:45.455 [2024-11-04 12:19:19.781906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:45.455 [2024-11-04 12:19:19.781911] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:45.455 [2024-11-04 12:19:19.781919] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:45.455 [2024-11-04 12:19:19.781926] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:45.455 [2024-11-04 12:19:19.781932] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:45.456 [2024-11-04 12:19:19.781941] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:45.456 [2024-11-04 12:19:19.781950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:45.456 [2024-11-04 12:19:19.782013] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:45.456 [2024-11-04 12:19:19.782022] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:45.456 [2024-11-04 12:19:19.782029] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:45.456 [2024-11-04 12:19:19.782034] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:45.456 [2024-11-04 12:19:19.782037] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:45.456 [2024-11-04 12:19:19.782044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:45.456 [2024-11-04 12:19:19.782056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:45.456 [2024-11-04 12:19:19.782065] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:45.456 [2024-11-04 12:19:19.782077] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:45.456 [2024-11-04 12:19:19.782087] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:45.456 [2024-11-04 12:19:19.782094] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:45.456 [2024-11-04 12:19:19.782099] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:45.456 [2024-11-04 12:19:19.782102] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:45.456 [2024-11-04 12:19:19.782108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:45.456 [2024-11-04 12:19:19.782129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:45.456 [2024-11-04 12:19:19.782141] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:45.456 [2024-11-04 12:19:19.782149] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:45.456 [2024-11-04 12:19:19.782156] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:45.456 [2024-11-04 12:19:19.782161] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:45.456 [2024-11-04 12:19:19.782164] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:45.456 [2024-11-04 12:19:19.782170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:45.456 [2024-11-04 12:19:19.782178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:45.456 [2024-11-04 12:19:19.782186] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:45.456 [2024-11-04 12:19:19.782193] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:45.456 [2024-11-04 12:19:19.782200] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:45.456 [2024-11-04 12:19:19.782206] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:45.456 [2024-11-04 12:19:19.782211] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:45.456 [2024-11-04 12:19:19.782217] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:45.456 [2024-11-04 12:19:19.782222] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:45.456 [2024-11-04 12:19:19.782227] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:45.456 [2024-11-04 12:19:19.782232] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:45.456 [2024-11-04 12:19:19.782250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:45.456 [2024-11-04 12:19:19.782258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:45.456 [2024-11-04 12:19:19.782270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:45.456 [2024-11-04 12:19:19.782282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:45.456 [2024-11-04 12:19:19.782297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:45.456 [2024-11-04 12:19:19.782310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:45.456 [2024-11-04 12:19:19.782321] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:45.456 [2024-11-04 12:19:19.782329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:45.456 [2024-11-04 12:19:19.782342] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:45.456 [2024-11-04 12:19:19.782347] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:45.456 [2024-11-04 12:19:19.782350] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:45.456 [2024-11-04 12:19:19.782354] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:45.456 [2024-11-04 12:19:19.782357] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:45.456 [2024-11-04 12:19:19.782364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:45.456 [2024-11-04 12:19:19.782372] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:45.456 [2024-11-04 12:19:19.782376] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:45.456 [2024-11-04 12:19:19.782380] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:45.456 [2024-11-04 12:19:19.782386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:45.456 [2024-11-04 12:19:19.782393] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:45.456 [2024-11-04 12:19:19.782398] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:45.456 [2024-11-04 12:19:19.782401] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:45.456 [2024-11-04 12:19:19.782407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:45.456 [2024-11-04 12:19:19.782417] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:45.456 [2024-11-04 12:19:19.782422] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:45.456 [2024-11-04 12:19:19.782425] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:45.456 [2024-11-04 12:19:19.782431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:45.456 [2024-11-04 12:19:19.782439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:45.456 [2024-11-04 12:19:19.782450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:45.456 [2024-11-04 12:19:19.782461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:45.456 [2024-11-04 12:19:19.782468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:45.456 ===================================================== 00:14:45.456 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:45.456 ===================================================== 00:14:45.456 Controller Capabilities/Features 00:14:45.456 ================================ 00:14:45.456 Vendor ID: 4e58 00:14:45.456 Subsystem Vendor ID: 4e58 00:14:45.456 Serial Number: SPDK1 00:14:45.456 Model Number: SPDK bdev Controller 00:14:45.456 Firmware Version: 25.01 00:14:45.456 Recommended Arb Burst: 6 00:14:45.456 IEEE OUI Identifier: 8d 6b 50 00:14:45.456 Multi-path I/O 00:14:45.456 May have multiple subsystem ports: Yes 00:14:45.456 May have multiple controllers: Yes 00:14:45.456 Associated with SR-IOV VF: No 00:14:45.456 Max Data Transfer Size: 131072 00:14:45.456 Max Number of Namespaces: 32 00:14:45.456 Max Number of I/O Queues: 127 00:14:45.456 NVMe Specification Version (VS): 1.3 00:14:45.456 NVMe Specification Version (Identify): 1.3 00:14:45.456 Maximum Queue Entries: 256 00:14:45.456 Contiguous Queues Required: Yes 00:14:45.456 Arbitration Mechanisms Supported 00:14:45.456 Weighted Round Robin: Not Supported 00:14:45.456 Vendor Specific: Not Supported 00:14:45.456 Reset Timeout: 15000 ms 00:14:45.456 Doorbell Stride: 4 bytes 00:14:45.456 NVM Subsystem Reset: Not Supported 00:14:45.456 Command Sets Supported 00:14:45.456 NVM Command Set: Supported 00:14:45.456 Boot Partition: Not Supported 00:14:45.457 Memory Page Size Minimum: 4096 bytes 00:14:45.457 Memory Page Size Maximum: 4096 bytes 00:14:45.457 Persistent Memory Region: Not Supported 00:14:45.457 Optional Asynchronous Events Supported 00:14:45.457 Namespace Attribute Notices: Supported 00:14:45.457 Firmware Activation Notices: Not Supported 00:14:45.457 ANA Change Notices: Not Supported 00:14:45.457 PLE Aggregate Log Change Notices: Not Supported 00:14:45.457 LBA Status Info Alert Notices: Not Supported 00:14:45.457 EGE Aggregate Log Change Notices: Not Supported 00:14:45.457 Normal NVM Subsystem Shutdown event: Not Supported 00:14:45.457 Zone Descriptor Change Notices: Not Supported 00:14:45.457 Discovery Log Change Notices: Not Supported 00:14:45.457 Controller Attributes 00:14:45.457 128-bit Host Identifier: Supported 00:14:45.457 Non-Operational Permissive Mode: Not Supported 00:14:45.457 NVM Sets: Not Supported 00:14:45.457 Read Recovery Levels: Not Supported 00:14:45.457 Endurance Groups: Not Supported 00:14:45.457 Predictable Latency Mode: Not Supported 00:14:45.457 Traffic Based Keep ALive: Not Supported 00:14:45.457 Namespace Granularity: Not Supported 00:14:45.457 SQ Associations: Not Supported 00:14:45.457 UUID List: Not Supported 00:14:45.457 Multi-Domain Subsystem: Not Supported 00:14:45.457 Fixed Capacity Management: Not Supported 00:14:45.457 Variable Capacity Management: Not Supported 00:14:45.457 Delete Endurance Group: Not Supported 00:14:45.457 Delete NVM Set: Not Supported 00:14:45.457 Extended LBA Formats Supported: Not Supported 00:14:45.457 Flexible Data Placement Supported: Not Supported 00:14:45.457 00:14:45.457 Controller Memory Buffer Support 00:14:45.457 ================================ 00:14:45.457 Supported: No 00:14:45.457 00:14:45.457 Persistent Memory Region Support 00:14:45.457 
================================ 00:14:45.457 Supported: No 00:14:45.457 00:14:45.457 Admin Command Set Attributes 00:14:45.457 ============================ 00:14:45.457 Security Send/Receive: Not Supported 00:14:45.457 Format NVM: Not Supported 00:14:45.457 Firmware Activate/Download: Not Supported 00:14:45.457 Namespace Management: Not Supported 00:14:45.457 Device Self-Test: Not Supported 00:14:45.457 Directives: Not Supported 00:14:45.457 NVMe-MI: Not Supported 00:14:45.457 Virtualization Management: Not Supported 00:14:45.457 Doorbell Buffer Config: Not Supported 00:14:45.457 Get LBA Status Capability: Not Supported 00:14:45.457 Command & Feature Lockdown Capability: Not Supported 00:14:45.457 Abort Command Limit: 4 00:14:45.457 Async Event Request Limit: 4 00:14:45.457 Number of Firmware Slots: N/A 00:14:45.457 Firmware Slot 1 Read-Only: N/A 00:14:45.457 Firmware Activation Without Reset: N/A 00:14:45.457 Multiple Update Detection Support: N/A 00:14:45.457 Firmware Update Granularity: No Information Provided 00:14:45.457 Per-Namespace SMART Log: No 00:14:45.457 Asymmetric Namespace Access Log Page: Not Supported 00:14:45.457 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:45.457 Command Effects Log Page: Supported 00:14:45.457 Get Log Page Extended Data: Supported 00:14:45.457 Telemetry Log Pages: Not Supported 00:14:45.457 Persistent Event Log Pages: Not Supported 00:14:45.457 Supported Log Pages Log Page: May Support 00:14:45.457 Commands Supported & Effects Log Page: Not Supported 00:14:45.457 Feature Identifiers & Effects Log Page:May Support 00:14:45.457 NVMe-MI Commands & Effects Log Page: May Support 00:14:45.457 Data Area 4 for Telemetry Log: Not Supported 00:14:45.457 Error Log Page Entries Supported: 128 00:14:45.457 Keep Alive: Supported 00:14:45.457 Keep Alive Granularity: 10000 ms 00:14:45.457 00:14:45.457 NVM Command Set Attributes 00:14:45.457 ========================== 00:14:45.457 Submission Queue Entry Size 00:14:45.457 Max: 64 00:14:45.457 Min: 64 00:14:45.457 Completion Queue Entry Size 00:14:45.457 Max: 16 00:14:45.457 Min: 16 00:14:45.457 Number of Namespaces: 32 00:14:45.457 Compare Command: Supported 00:14:45.457 Write Uncorrectable Command: Not Supported 00:14:45.457 Dataset Management Command: Supported 00:14:45.457 Write Zeroes Command: Supported 00:14:45.457 Set Features Save Field: Not Supported 00:14:45.457 Reservations: Not Supported 00:14:45.457 Timestamp: Not Supported 00:14:45.457 Copy: Supported 00:14:45.457 Volatile Write Cache: Present 00:14:45.457 Atomic Write Unit (Normal): 1 00:14:45.457 Atomic Write Unit (PFail): 1 00:14:45.457 Atomic Compare & Write Unit: 1 00:14:45.457 Fused Compare & Write: Supported 00:14:45.457 Scatter-Gather List 00:14:45.457 SGL Command Set: Supported (Dword aligned) 00:14:45.457 SGL Keyed: Not Supported 00:14:45.457 SGL Bit Bucket Descriptor: Not Supported 00:14:45.457 SGL Metadata Pointer: Not Supported 00:14:45.457 Oversized SGL: Not Supported 00:14:45.457 SGL Metadata Address: Not Supported 00:14:45.457 SGL Offset: Not Supported 00:14:45.457 Transport SGL Data Block: Not Supported 00:14:45.457 Replay Protected Memory Block: Not Supported 00:14:45.457 00:14:45.457 Firmware Slot Information 00:14:45.457 ========================= 00:14:45.457 Active slot: 1 00:14:45.457 Slot 1 Firmware Revision: 25.01 00:14:45.457 00:14:45.457 00:14:45.457 Commands Supported and Effects 00:14:45.457 ============================== 00:14:45.457 Admin Commands 00:14:45.457 -------------- 00:14:45.457 Get Log Page (02h): Supported 
00:14:45.457 Identify (06h): Supported 00:14:45.457 Abort (08h): Supported 00:14:45.457 Set Features (09h): Supported 00:14:45.457 Get Features (0Ah): Supported 00:14:45.457 Asynchronous Event Request (0Ch): Supported 00:14:45.457 Keep Alive (18h): Supported 00:14:45.457 I/O Commands 00:14:45.457 ------------ 00:14:45.457 Flush (00h): Supported LBA-Change 00:14:45.457 Write (01h): Supported LBA-Change 00:14:45.457 Read (02h): Supported 00:14:45.457 Compare (05h): Supported 00:14:45.457 Write Zeroes (08h): Supported LBA-Change 00:14:45.457 Dataset Management (09h): Supported LBA-Change 00:14:45.457 Copy (19h): Supported LBA-Change 00:14:45.457 00:14:45.457 Error Log 00:14:45.457 ========= 00:14:45.457 00:14:45.457 Arbitration 00:14:45.457 =========== 00:14:45.457 Arbitration Burst: 1 00:14:45.457 00:14:45.457 Power Management 00:14:45.457 ================ 00:14:45.457 Number of Power States: 1 00:14:45.457 Current Power State: Power State #0 00:14:45.457 Power State #0: 00:14:45.457 Max Power: 0.00 W 00:14:45.457 Non-Operational State: Operational 00:14:45.457 Entry Latency: Not Reported 00:14:45.457 Exit Latency: Not Reported 00:14:45.457 Relative Read Throughput: 0 00:14:45.457 Relative Read Latency: 0 00:14:45.457 Relative Write Throughput: 0 00:14:45.457 Relative Write Latency: 0 00:14:45.457 Idle Power: Not Reported 00:14:45.457 Active Power: Not Reported 00:14:45.457 Non-Operational Permissive Mode: Not Supported 00:14:45.457 00:14:45.457 Health Information 00:14:45.457 ================== 00:14:45.457 Critical Warnings: 00:14:45.457 Available Spare Space: OK 00:14:45.457 Temperature: OK 00:14:45.457 Device Reliability: OK 00:14:45.457 Read Only: No 00:14:45.457 Volatile Memory Backup: OK 00:14:45.457 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:45.457 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:45.457 Available Spare: 0% 00:14:45.457 Available Spare Threshold: 0% 00:14:45.458 Life Percentage Used: 0% 00:14:45.458 Data Units Read: 0 00:14:45.458 Data Units Written: 0 00:14:45.458 Host Read Commands: 0 00:14:45.458 Host Write Commands: 0 00:14:45.458 Controller Busy Time: 0 minutes 00:14:45.458 Power Cycles: 0 00:14:45.458 Power On Hours: 0 hours 00:14:45.458 Unsafe Shutdowns: 0 00:14:45.458 Unrecoverable Media Errors: 0 00:14:45.458 Lifetime Error Log Entries: 0 00:14:45.458 Warning Temperature Time: 0 minutes 00:14:45.458 Critical Temperature Time: 0 minutes 00:14:45.458 00:14:45.458 Number of Queues 00:14:45.458 ================ 00:14:45.458 Number of I/O Submission Queues: 127 00:14:45.458 Number of I/O Completion Queues: 127 00:14:45.458 00:14:45.458 Active Namespaces 00:14:45.458 ================= 00:14:45.458 Namespace ID:1 00:14:45.458 Error Recovery Timeout: Unlimited 00:14:45.458 Command Set Identifier: NVM (00h) 00:14:45.458 Deallocate: Supported 00:14:45.458 Deallocated/Unwritten Error: Not Supported 00:14:45.458 Deallocated Read Value: Unknown 00:14:45.458 Deallocate in Write Zeroes: Not Supported 00:14:45.458 Deallocated Guard Field: 0xFFFF 00:14:45.458 Flush: Supported 00:14:45.458 Reservation: Supported 00:14:45.458 Namespace Sharing Capabilities: Multiple Controllers 00:14:45.458 Size (in LBAs): 131072 (0GiB) 00:14:45.458 Capacity (in LBAs): 131072 (0GiB) 00:14:45.458 Utilization (in LBAs): 131072 (0GiB) 00:14:45.458 NGUID: C89194892734416B890EF85E691F0919 00:14:45.458 UUID: c8919489-2734-416b-890e-f85e691f0919 00:14:45.458 Thin Provisioning: Not Supported 00:14:45.458 Per-NS Atomic Units: Yes 00:14:45.458 Atomic Boundary Size (Normal): 0 00:14:45.458 Atomic Boundary Size (PFail): 0 00:14:45.458 Atomic Boundary Offset: 0 00:14:45.458 Maximum Single Source Range Length: 65535 00:14:45.458 Maximum Copy Length: 65535 00:14:45.458 Maximum Source Range Count: 1 00:14:45.458 NGUID/EUI64 Never Reused: No 00:14:45.458 Namespace Write Protected: No 00:14:45.458 Number of LBA Formats: 1 00:14:45.458 Current LBA Format: LBA Format #00 00:14:45.458 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:45.458 00:14:45.458
00:14:45.457 [2024-11-04 12:19:19.782565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:45.457 [2024-11-04 12:19:19.782573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:45.457 [2024-11-04 12:19:19.782604] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:45.457 [2024-11-04 12:19:19.782614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.457 [2024-11-04 12:19:19.782620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.457 [2024-11-04 12:19:19.782627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.457 [2024-11-04 12:19:19.782633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.457 [2024-11-04 12:19:19.785753] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:45.458 [2024-11-04 12:19:19.785764] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:45.458 [2024-11-04 12:19:19.786661] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:45.458 [2024-11-04 12:19:19.786702] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:45.458 [2024-11-04 12:19:19.786709] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:45.458 [2024-11-04 12:19:19.787675] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:45.458 [2024-11-04 12:19:19.787686] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:45.458 [2024-11-04 12:19:19.787750] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:45.458 [2024-11-04 12:19:19.789700] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:45.458
12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:45.458 [2024-11-04 12:19:19.982397] vfio_user.c:2836:enable_ctrlr: *NOTICE*:
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:50.747 Initializing NVMe Controllers 00:14:50.747 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:50.747 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:50.747 Initialization complete. Launching workers. 00:14:50.747 ======================================================== 00:14:50.747 Latency(us) 00:14:50.747 Device Information : IOPS MiB/s Average min max 00:14:50.747 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39942.78 156.03 3204.24 848.71 6929.00 00:14:50.747 ======================================================== 00:14:50.747 Total : 39942.78 156.03 3204.24 848.71 6929.00 00:14:50.747 00:14:50.747 [2024-11-04 12:19:24.999162] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:50.747 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:50.747 [2024-11-04 12:19:25.184025] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:56.032 Initializing NVMe Controllers 00:14:56.032 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:56.032 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:56.032 Initialization complete. Launching workers. 00:14:56.032 ======================================================== 00:14:56.032 Latency(us) 00:14:56.032 Device Information : IOPS MiB/s Average min max 00:14:56.032 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16045.00 62.68 7986.98 5444.07 11973.17 00:14:56.032 ======================================================== 00:14:56.032 Total : 16045.00 62.68 7986.98 5444.07 11973.17 00:14:56.032 00:14:56.032 [2024-11-04 12:19:30.225814] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:56.032 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:56.032 [2024-11-04 12:19:30.417696] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:01.488 [2024-11-04 12:19:35.504983] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:01.488 Initializing NVMe Controllers 00:15:01.488 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:01.488 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:01.488 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:01.488 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:01.488 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:01.488 Initialization complete. Launching workers. 
00:15:01.488 Starting thread on core 2 00:15:01.488 Starting thread on core 3 00:15:01.488 Starting thread on core 1 00:15:01.488 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:01.488 [2024-11-04 12:19:35.766843] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:04.785 [2024-11-04 12:19:38.833516] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:04.785 Initializing NVMe Controllers 00:15:04.785 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:04.785 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:04.785 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:04.785 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:04.785 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:04.785 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:04.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:04.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:04.786 Initialization complete. Launching workers. 00:15:04.786 Starting thread on core 1 with urgent priority queue 00:15:04.786 Starting thread on core 2 with urgent priority queue 00:15:04.786 Starting thread on core 3 with urgent priority queue 00:15:04.786 Starting thread on core 0 with urgent priority queue 00:15:04.786 SPDK bdev Controller (SPDK1 ) core 0: 12084.00 IO/s 8.28 secs/100000 ios 00:15:04.786 SPDK bdev Controller (SPDK1 ) core 1: 11214.67 IO/s 8.92 secs/100000 ios 00:15:04.786 SPDK bdev Controller (SPDK1 ) core 2: 9116.67 IO/s 10.97 secs/100000 ios 00:15:04.786 SPDK bdev Controller (SPDK1 ) core 3: 11086.67 IO/s 9.02 secs/100000 ios 00:15:04.786 ======================================================== 00:15:04.786 00:15:04.786 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:04.786 [2024-11-04 12:19:39.100237] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:04.786 Initializing NVMe Controllers 00:15:04.786 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:04.786 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:04.786 Namespace ID: 1 size: 0GB 00:15:04.786 Initialization complete. 00:15:04.786 INFO: using host memory buffer for IO 00:15:04.786 Hello world! 
00:15:04.786 [2024-11-04 12:19:39.133436] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:04.786 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:05.046 [2024-11-04 12:19:39.398193] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:05.988 Initializing NVMe Controllers 00:15:05.988 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:05.988 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:05.988 Initialization complete. Launching workers. 00:15:05.988 submit (in ns) avg, min, max = 9539.3, 3905.0, 4995968.3 00:15:05.988 complete (in ns) avg, min, max = 16612.7, 2398.3, 4996072.5 00:15:05.988 00:15:05.988 Submit histogram 00:15:05.988 ================ 00:15:05.988 Range in us Cumulative Count 00:15:05.988 3.893 - 3.920: 0.1212% ( 23) 00:15:05.988 3.920 - 3.947: 1.4918% ( 260) 00:15:05.988 3.947 - 3.973: 7.9068% ( 1217) 00:15:05.988 3.973 - 4.000: 17.1051% ( 1745) 00:15:05.988 4.000 - 4.027: 29.0074% ( 2258) 00:15:05.988 4.027 - 4.053: 40.6410% ( 2207) 00:15:05.988 4.053 - 4.080: 52.1849% ( 2190) 00:15:05.988 4.080 - 4.107: 67.4978% ( 2905) 00:15:05.988 4.107 - 4.133: 82.1781% ( 2785) 00:15:05.988 4.133 - 4.160: 91.7927% ( 1824) 00:15:05.988 4.160 - 4.187: 96.7213% ( 935) 00:15:05.988 4.187 - 4.213: 98.5241% ( 342) 00:15:05.988 4.213 - 4.240: 99.1724% ( 123) 00:15:05.988 4.240 - 4.267: 99.3938% ( 42) 00:15:05.988 4.267 - 4.293: 99.4360% ( 8) 00:15:05.988 4.293 - 4.320: 99.4413% ( 1) 00:15:05.988 4.400 - 4.427: 99.4465% ( 1) 00:15:05.988 4.533 - 4.560: 99.4518% ( 1) 00:15:05.988 4.693 - 4.720: 99.4571% ( 1) 00:15:05.988 4.720 - 4.747: 99.4623% ( 1) 00:15:05.989 4.773 - 4.800: 99.4676% ( 1) 00:15:05.989 4.827 - 4.853: 99.4782% ( 2) 00:15:05.989 4.853 - 4.880: 99.4887% ( 2) 00:15:05.989 5.573 - 5.600: 99.4940% ( 1) 00:15:05.989 5.840 - 5.867: 99.4992% ( 1) 00:15:05.989 5.867 - 5.893: 99.5098% ( 2) 00:15:05.989 5.947 - 5.973: 99.5203% ( 2) 00:15:05.989 5.973 - 6.000: 99.5256% ( 1) 00:15:05.989 6.053 - 6.080: 99.5309% ( 1) 00:15:05.989 6.107 - 6.133: 99.5361% ( 1) 00:15:05.989 6.133 - 6.160: 99.5414% ( 1) 00:15:05.989 6.160 - 6.187: 99.5519% ( 2) 00:15:05.989 6.187 - 6.213: 99.5625% ( 2) 00:15:05.989 6.213 - 6.240: 99.5678% ( 1) 00:15:05.989 6.240 - 6.267: 99.5730% ( 1) 00:15:05.989 6.267 - 6.293: 99.5783% ( 1) 00:15:05.989 6.320 - 6.347: 99.5836% ( 1) 00:15:05.989 6.347 - 6.373: 99.5941% ( 2) 00:15:05.989 6.560 - 6.587: 99.5994% ( 1) 00:15:05.989 6.587 - 6.613: 99.6047% ( 1) 00:15:05.989 6.613 - 6.640: 99.6152% ( 2) 00:15:05.989 6.667 - 6.693: 99.6205% ( 1) 00:15:05.989 6.720 - 6.747: 99.6257% ( 1) 00:15:05.989 6.747 - 6.773: 99.6310% ( 1) 00:15:05.989 6.773 - 6.800: 99.6363% ( 1) 00:15:05.989 6.800 - 6.827: 99.6416% ( 1) 00:15:05.989 6.827 - 6.880: 99.6468% ( 1) 00:15:05.989 6.933 - 6.987: 99.6574% ( 2) 00:15:05.989 6.987 - 7.040: 99.6626% ( 1) 00:15:05.989 7.040 - 7.093: 99.6785% ( 3) 00:15:05.989 7.093 - 7.147: 99.7048% ( 5) 00:15:05.989 7.147 - 7.200: 99.7101% ( 1) 00:15:05.989 7.200 - 7.253: 99.7206% ( 2) 00:15:05.989 7.253 - 7.307: 99.7364% ( 3) 00:15:05.989 7.360 - 7.413: 99.7417% ( 1) 00:15:05.989 7.413 - 7.467: 99.7575% ( 3) 00:15:05.989 7.467 - 7.520: 99.7786% ( 4) 00:15:05.989 7.520 - 7.573: 99.7892% ( 2) 
00:15:05.989 7.573 - 7.627: 99.7997% ( 2) 00:15:05.989 7.680 - 7.733: 99.8155% ( 3) 00:15:05.989 7.733 - 7.787: 99.8208% ( 1) 00:15:05.989 7.840 - 7.893: 99.8261% ( 1) 00:15:05.989 8.107 - 8.160: 99.8366% ( 2) 00:15:05.989 8.267 - 8.320: 99.8419% ( 1) 00:15:05.989 8.427 - 8.480: 99.8471% ( 1) 00:15:05.989 8.640 - 8.693: 99.8524% ( 1) 00:15:05.989 14.933 - 15.040: 99.8577% ( 1) 00:15:05.989 16.960 - 17.067: 99.8629% ( 1) 00:15:05.989 2662.400 - 2676.053: 99.8682% ( 1) 00:15:05.989 3986.773 - 4014.080: 99.9947% ( 24) 00:15:05.989 4969.813 - 4997.120: 100.0000% ( 1) 00:15:05.989 00:15:05.989 Complete histogram 00:15:05.989 ================== 00:15:05.989 Range in us Cumulative Count 00:15:05.989 2.387 - 2.400: 0.0053% ( 1) 00:15:05.989 2.400 - 2.413: 0.4902% ( 92) 00:15:05.989 2.413 - 2.427: 0.5640% ( 14) 00:15:05.989 2.427 - 2.440: 0.7116% ( 28) 00:15:05.989 2.440 - 2.453: 0.7907% ( 15) 00:15:05.989 2.453 - 2.467: 30.1513% ( 5570) 00:15:05.989 2.467 - 2.480: 46.8136% ( 3161) [2024-11-04 12:19:40.420652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:05.989
00:15:05.989 2.480 - 2.493: 63.9292% ( 3247) 00:15:05.989 2.493 - 2.507: 75.3993% ( 2176) 00:15:05.989 2.507 - 2.520: 79.6268% ( 802) 00:15:05.989 2.520 - 2.533: 82.6683% ( 577) 00:15:05.989 2.533 - 2.547: 88.1134% ( 1033) 00:15:05.989 2.547 - 2.560: 92.8839% ( 905) 00:15:05.989 2.560 - 2.573: 96.1626% ( 622) 00:15:05.989 2.573 - 2.587: 98.2763% ( 401) 00:15:05.989 2.587 - 2.600: 99.0617% ( 149) 00:15:05.989 2.600 - 2.613: 99.3358% ( 52) 00:15:05.989 2.613 - 2.627: 99.4254% ( 17) 00:15:05.989 2.627 - 2.640: 99.4307% ( 1) 00:15:05.989 2.640 - 2.653: 99.4360% ( 1) 00:15:05.989 2.653 - 2.667: 99.4413% ( 1) 00:15:05.989 2.667 - 2.680: 99.4465% ( 1) 00:15:05.989 2.720 - 2.733: 99.4518% ( 1) 00:15:05.989 2.773 - 2.787: 99.4571% ( 1) 00:15:05.989 2.827 - 2.840: 99.4623% ( 1) 00:15:05.989 2.987 - 3.000: 99.4676% ( 1) 00:15:05.989 3.053 - 3.067: 99.4729% ( 1) 00:15:05.989 4.320 - 4.347: 99.4834% ( 2) 00:15:05.989 4.427 - 4.453: 99.4940% ( 2) 00:15:05.989 4.453 - 4.480: 99.4992% ( 1) 00:15:05.989 4.560 - 4.587: 99.5045% ( 1) 00:15:05.989 4.720 - 4.747: 99.5098% ( 1) 00:15:05.989 4.747 - 4.773: 99.5150% ( 1) 00:15:05.989 4.827 - 4.853: 99.5203% ( 1) 00:15:05.989 4.853 - 4.880: 99.5256% ( 1) 00:15:05.989 4.907 - 4.933: 99.5309% ( 1) 00:15:05.989 4.933 - 4.960: 99.5361% ( 1) 00:15:05.989 5.013 - 5.040: 99.5414% ( 1) 00:15:05.989 5.093 - 5.120: 99.5467% ( 1) 00:15:05.989 5.200 - 5.227: 99.5519% ( 1) 00:15:05.989 5.253 - 5.280: 99.5572% ( 1) 00:15:05.989 5.307 - 5.333: 99.5625% ( 1) 00:15:05.989 5.547 - 5.573: 99.5678% ( 1) 00:15:05.989 5.573 - 5.600: 99.5783% ( 2) 00:15:05.989 5.653 - 5.680: 99.5836% ( 1) 00:15:05.989 5.680 - 5.707: 99.5888% ( 1) 00:15:05.989 5.733 - 5.760: 99.5941% ( 1) 00:15:05.989 5.787 - 5.813: 99.5994% ( 1) 00:15:05.989 5.973 - 6.000: 99.6047% ( 1) 00:15:05.989 6.000 - 6.027: 99.6099% ( 1) 00:15:05.989 6.107 - 6.133: 99.6152% ( 1) 00:15:05.989 6.240 - 6.267: 99.6205% ( 1) 00:15:05.989 6.587 - 6.613: 99.6257% ( 1) 00:15:05.989 8.587 - 8.640: 99.6310% ( 1) 00:15:05.989 13.067 - 13.120: 99.6363% ( 1) 00:15:05.989 13.760 - 13.867: 99.6416% ( 1) 00:15:05.989 14.187 - 14.293: 99.6468% ( 1) 00:15:05.989 3099.307 - 3112.960: 99.6521% ( 1) 00:15:05.989 3986.773 - 4014.080: 99.9947% ( 65) 00:15:05.989 4969.813 - 4997.120: 100.0000% ( 1) 00:15:05.989 00:15:05.989 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user
/var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:05.989 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:05.989 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:05.989 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:05.989 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:06.251 [ 00:15:06.251 { 00:15:06.251 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:06.251 "subtype": "Discovery", 00:15:06.251 "listen_addresses": [], 00:15:06.251 "allow_any_host": true, 00:15:06.251 "hosts": [] 00:15:06.251 }, 00:15:06.251 { 00:15:06.251 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:06.251 "subtype": "NVMe", 00:15:06.251 "listen_addresses": [ 00:15:06.251 { 00:15:06.251 "trtype": "VFIOUSER", 00:15:06.251 "adrfam": "IPv4", 00:15:06.251 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:06.251 "trsvcid": "0" 00:15:06.251 } 00:15:06.251 ], 00:15:06.251 "allow_any_host": true, 00:15:06.251 "hosts": [], 00:15:06.251 "serial_number": "SPDK1", 00:15:06.251 "model_number": "SPDK bdev Controller", 00:15:06.251 "max_namespaces": 32, 00:15:06.251 "min_cntlid": 1, 00:15:06.251 "max_cntlid": 65519, 00:15:06.251 "namespaces": [ 00:15:06.251 { 00:15:06.251 "nsid": 1, 00:15:06.251 "bdev_name": "Malloc1", 00:15:06.251 "name": "Malloc1", 00:15:06.251 "nguid": "C89194892734416B890EF85E691F0919", 00:15:06.251 "uuid": "c8919489-2734-416b-890e-f85e691f0919" 00:15:06.251 } 00:15:06.251 ] 00:15:06.251 }, 00:15:06.251 { 00:15:06.251 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:06.251 "subtype": "NVMe", 00:15:06.251 "listen_addresses": [ 00:15:06.251 { 00:15:06.251 "trtype": "VFIOUSER", 00:15:06.251 "adrfam": "IPv4", 00:15:06.251 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:06.251 "trsvcid": "0" 00:15:06.251 } 00:15:06.251 ], 00:15:06.251 "allow_any_host": true, 00:15:06.251 "hosts": [], 00:15:06.251 "serial_number": "SPDK2", 00:15:06.251 "model_number": "SPDK bdev Controller", 00:15:06.251 "max_namespaces": 32, 00:15:06.251 "min_cntlid": 1, 00:15:06.251 "max_cntlid": 65519, 00:15:06.251 "namespaces": [ 00:15:06.251 { 00:15:06.251 "nsid": 1, 00:15:06.251 "bdev_name": "Malloc2", 00:15:06.251 "name": "Malloc2", 00:15:06.251 "nguid": "A208F738FC724CC1B94CBC03713F2540", 00:15:06.251 "uuid": "a208f738-fc72-4cc1-b94c-bc03713f2540" 00:15:06.251 } 00:15:06.251 ] 00:15:06.251 } 00:15:06.251 ] 00:15:06.251 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:06.251 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:06.251 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1599041 00:15:06.251 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:06.251 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:06.251 12:19:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:06.251 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:06.251 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:06.251 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:06.251 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:06.251 [2024-11-04 12:19:40.813525] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:06.511 Malloc3 00:15:06.511 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:06.511 [2024-11-04 12:19:41.016946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:06.511 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:06.511 Asynchronous Event Request test 00:15:06.511 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:06.511 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:06.511 Registering asynchronous event callbacks... 00:15:06.511 Starting namespace attribute notice tests for all controllers... 00:15:06.511 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:06.512 aer_cb - Changed Namespace 00:15:06.512 Cleaning up... 
00:15:06.773 [ 00:15:06.773 { 00:15:06.773 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:06.773 "subtype": "Discovery", 00:15:06.773 "listen_addresses": [], 00:15:06.773 "allow_any_host": true, 00:15:06.773 "hosts": [] 00:15:06.773 }, 00:15:06.773 { 00:15:06.773 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:06.773 "subtype": "NVMe", 00:15:06.773 "listen_addresses": [ 00:15:06.773 { 00:15:06.773 "trtype": "VFIOUSER", 00:15:06.773 "adrfam": "IPv4", 00:15:06.773 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:06.773 "trsvcid": "0" 00:15:06.773 } 00:15:06.773 ], 00:15:06.773 "allow_any_host": true, 00:15:06.773 "hosts": [], 00:15:06.773 "serial_number": "SPDK1", 00:15:06.773 "model_number": "SPDK bdev Controller", 00:15:06.774 "max_namespaces": 32, 00:15:06.774 "min_cntlid": 1, 00:15:06.774 "max_cntlid": 65519, 00:15:06.774 "namespaces": [ 00:15:06.774 { 00:15:06.774 "nsid": 1, 00:15:06.774 "bdev_name": "Malloc1", 00:15:06.774 "name": "Malloc1", 00:15:06.774 "nguid": "C89194892734416B890EF85E691F0919", 00:15:06.774 "uuid": "c8919489-2734-416b-890e-f85e691f0919" 00:15:06.774 }, 00:15:06.774 { 00:15:06.774 "nsid": 2, 00:15:06.774 "bdev_name": "Malloc3", 00:15:06.774 "name": "Malloc3", 00:15:06.774 "nguid": "BB0BF048B3854199A0EDFFAAF7C6FD03", 00:15:06.774 "uuid": "bb0bf048-b385-4199-a0ed-ffaaf7c6fd03" 00:15:06.774 } 00:15:06.774 ] 00:15:06.774 }, 00:15:06.774 { 00:15:06.774 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:06.774 "subtype": "NVMe", 00:15:06.774 "listen_addresses": [ 00:15:06.774 { 00:15:06.774 "trtype": "VFIOUSER", 00:15:06.774 "adrfam": "IPv4", 00:15:06.774 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:06.774 "trsvcid": "0" 00:15:06.774 } 00:15:06.774 ], 00:15:06.774 "allow_any_host": true, 00:15:06.774 "hosts": [], 00:15:06.774 "serial_number": "SPDK2", 00:15:06.774 "model_number": "SPDK bdev Controller", 00:15:06.774 "max_namespaces": 32, 00:15:06.774 "min_cntlid": 1, 00:15:06.774 "max_cntlid": 65519, 00:15:06.774 "namespaces": [ 00:15:06.774 { 00:15:06.774 "nsid": 1, 00:15:06.774 "bdev_name": "Malloc2", 00:15:06.774 "name": "Malloc2", 00:15:06.774 "nguid": "A208F738FC724CC1B94CBC03713F2540", 00:15:06.774 "uuid": "a208f738-fc72-4cc1-b94c-bc03713f2540" 00:15:06.774 } 00:15:06.774 ] 00:15:06.774 } 00:15:06.774 ] 00:15:06.774 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1599041 00:15:06.774 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:06.774 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:06.774 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:06.774 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:06.774 [2024-11-04 12:19:41.239052] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:15:06.774 [2024-11-04 12:19:41.239074] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1599051 ] 00:15:06.774 [2024-11-04 12:19:41.268834] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:06.774 [2024-11-04 12:19:41.271068] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:06.774 [2024-11-04 12:19:41.271093] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3e84baf000 00:15:06.774 [2024-11-04 12:19:41.272069] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:06.774 [2024-11-04 12:19:41.273079] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:06.774 [2024-11-04 12:19:41.274082] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:06.774 [2024-11-04 12:19:41.275086] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:06.774 [2024-11-04 12:19:41.276094] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:06.774 [2024-11-04 12:19:41.277105] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:06.774 [2024-11-04 12:19:41.278109] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:06.774 [2024-11-04 12:19:41.279112] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:06.774 [2024-11-04 12:19:41.280121] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:06.774 [2024-11-04 12:19:41.280131] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3e84ba4000 00:15:06.774 [2024-11-04 12:19:41.281456] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:06.774 [2024-11-04 12:19:41.300899] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:06.774 [2024-11-04 12:19:41.300929] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:06.774 [2024-11-04 12:19:41.305994] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:06.774 [2024-11-04 12:19:41.306037] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:06.774 [2024-11-04 12:19:41.306122] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:06.774 [2024-11-04 
12:19:41.306141] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:06.774 [2024-11-04 12:19:41.306147] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:06.774 [2024-11-04 12:19:41.307000] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:06.774 [2024-11-04 12:19:41.307010] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:06.774 [2024-11-04 12:19:41.307018] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:06.774 [2024-11-04 12:19:41.308006] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:06.774 [2024-11-04 12:19:41.308014] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:06.774 [2024-11-04 12:19:41.308022] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:06.774 [2024-11-04 12:19:41.309011] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:06.774 [2024-11-04 12:19:41.309020] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:06.774 [2024-11-04 12:19:41.310015] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:06.774 [2024-11-04 12:19:41.310025] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:06.774 [2024-11-04 12:19:41.310030] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:06.774 [2024-11-04 12:19:41.310037] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:06.774 [2024-11-04 12:19:41.310143] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:06.774 [2024-11-04 12:19:41.310148] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:06.774 [2024-11-04 12:19:41.310153] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:06.774 [2024-11-04 12:19:41.311020] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:06.774 [2024-11-04 12:19:41.312023] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:06.774 [2024-11-04 12:19:41.313027] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:15:06.774 [2024-11-04 12:19:41.314025] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:06.774 [2024-11-04 12:19:41.314065] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:06.774 [2024-11-04 12:19:41.315032] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:06.774 [2024-11-04 12:19:41.315041] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:06.774 [2024-11-04 12:19:41.315046] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:06.774 [2024-11-04 12:19:41.315070] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:06.774 [2024-11-04 12:19:41.315077] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:06.775 [2024-11-04 12:19:41.315090] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:06.775 [2024-11-04 12:19:41.315095] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:06.775 [2024-11-04 12:19:41.315099] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:06.775 [2024-11-04 12:19:41.315110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:06.775 [2024-11-04 12:19:41.321755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:06.775 [2024-11-04 12:19:41.321766] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:06.775 [2024-11-04 12:19:41.321771] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:06.775 [2024-11-04 12:19:41.321775] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:06.775 [2024-11-04 12:19:41.321780] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:06.775 [2024-11-04 12:19:41.321785] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:06.775 [2024-11-04 12:19:41.321790] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:06.775 [2024-11-04 12:19:41.321794] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:06.775 [2024-11-04 12:19:41.321802] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:06.775 [2024-11-04 12:19:41.321812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:06.775 [2024-11-04 12:19:41.329750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:06.775 [2024-11-04 12:19:41.329766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.775 [2024-11-04 12:19:41.329775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.775 [2024-11-04 12:19:41.329784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.775 [2024-11-04 12:19:41.329792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.775 [2024-11-04 12:19:41.329797] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:06.775 [2024-11-04 12:19:41.329804] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:06.775 [2024-11-04 12:19:41.329813] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:06.775 [2024-11-04 12:19:41.337751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:06.775 [2024-11-04 12:19:41.337759] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:06.775 [2024-11-04 12:19:41.337769] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:06.775 [2024-11-04 12:19:41.337776] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:06.775 [2024-11-04 12:19:41.337782] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:06.775 [2024-11-04 12:19:41.337791] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:07.037 [2024-11-04 12:19:41.345761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:07.037 [2024-11-04 12:19:41.345827] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:07.037 [2024-11-04 12:19:41.345836] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:07.037 [2024-11-04 12:19:41.345844] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:07.037 [2024-11-04 12:19:41.345849] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:07.037 [2024-11-04 12:19:41.345852] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:15:07.037 [2024-11-04 12:19:41.345859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:07.037 [2024-11-04 12:19:41.353751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:07.037 [2024-11-04 12:19:41.353763] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:07.037 [2024-11-04 12:19:41.353776] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:07.037 [2024-11-04 12:19:41.353784] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:07.037 [2024-11-04 12:19:41.353791] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:07.037 [2024-11-04 12:19:41.353796] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:07.037 [2024-11-04 12:19:41.353799] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.037 [2024-11-04 12:19:41.353806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:07.037 [2024-11-04 12:19:41.361752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:07.037 [2024-11-04 12:19:41.361767] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:07.037 [2024-11-04 12:19:41.361775] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:07.037 [2024-11-04 12:19:41.361783] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:07.037 [2024-11-04 12:19:41.361787] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:07.037 [2024-11-04 12:19:41.361791] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.037 [2024-11-04 12:19:41.361797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:07.037 [2024-11-04 12:19:41.369751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:07.037 [2024-11-04 12:19:41.369761] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:07.037 [2024-11-04 12:19:41.369768] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:07.037 [2024-11-04 12:19:41.369776] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:07.037 [2024-11-04 12:19:41.369782] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:07.037 [2024-11-04 12:19:41.369787] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:07.037 [2024-11-04 12:19:41.369792] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:07.037 [2024-11-04 12:19:41.369798] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:07.037 [2024-11-04 12:19:41.369802] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:07.037 [2024-11-04 12:19:41.369807] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:07.037 [2024-11-04 12:19:41.369823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:07.037 [2024-11-04 12:19:41.377751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:07.038 [2024-11-04 12:19:41.377765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:07.038 [2024-11-04 12:19:41.385753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:07.038 [2024-11-04 12:19:41.385766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:07.038 [2024-11-04 12:19:41.393751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:07.038 [2024-11-04 12:19:41.393764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:07.038 [2024-11-04 12:19:41.401751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:07.038 [2024-11-04 12:19:41.401767] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:07.038 [2024-11-04 12:19:41.401772] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:07.038 [2024-11-04 12:19:41.401776] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:07.038 [2024-11-04 12:19:41.401780] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:07.038 [2024-11-04 12:19:41.401783] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:07.038 [2024-11-04 12:19:41.401790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:07.038 [2024-11-04 12:19:41.401798] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:07.038 [2024-11-04 12:19:41.401802] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:07.038 [2024-11-04 12:19:41.401810] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.038 [2024-11-04 12:19:41.401816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:07.038 [2024-11-04 12:19:41.401824] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:07.038 [2024-11-04 12:19:41.401828] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:07.038 [2024-11-04 12:19:41.401832] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.038 [2024-11-04 12:19:41.401838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:07.038 [2024-11-04 12:19:41.401847] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:07.038 [2024-11-04 12:19:41.401852] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:07.038 [2024-11-04 12:19:41.401855] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.038 [2024-11-04 12:19:41.401861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:07.038 [2024-11-04 12:19:41.409753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:07.038 [2024-11-04 12:19:41.409767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:07.038 [2024-11-04 12:19:41.409778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:07.038 [2024-11-04 12:19:41.409786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:07.038 ===================================================== 00:15:07.038 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:07.038 ===================================================== 00:15:07.038 Controller Capabilities/Features 00:15:07.038 ================================ 00:15:07.038 Vendor ID: 4e58 00:15:07.038 Subsystem Vendor ID: 4e58 00:15:07.038 Serial Number: SPDK2 00:15:07.038 Model Number: SPDK bdev Controller 00:15:07.038 Firmware Version: 25.01 00:15:07.038 Recommended Arb Burst: 6 00:15:07.038 IEEE OUI Identifier: 8d 6b 50 00:15:07.038 Multi-path I/O 00:15:07.038 May have multiple subsystem ports: Yes 00:15:07.038 May have multiple controllers: Yes 00:15:07.038 Associated with SR-IOV VF: No 00:15:07.038 Max Data Transfer Size: 131072 00:15:07.038 Max Number of Namespaces: 32 00:15:07.038 Max Number of I/O Queues: 127 00:15:07.038 NVMe Specification Version (VS): 1.3 00:15:07.038 NVMe Specification Version (Identify): 1.3 00:15:07.038 Maximum Queue Entries: 256 00:15:07.038 Contiguous Queues Required: Yes 00:15:07.038 Arbitration Mechanisms Supported 00:15:07.038 Weighted Round Robin: Not Supported 00:15:07.038 Vendor Specific: Not Supported 00:15:07.038 Reset Timeout: 15000 ms 00:15:07.038 Doorbell Stride: 4 bytes 00:15:07.038 NVM Subsystem Reset: Not Supported 00:15:07.038 Command 
Sets Supported 00:15:07.038 NVM Command Set: Supported 00:15:07.038 Boot Partition: Not Supported 00:15:07.038 Memory Page Size Minimum: 4096 bytes 00:15:07.038 Memory Page Size Maximum: 4096 bytes 00:15:07.038 Persistent Memory Region: Not Supported 00:15:07.038 Optional Asynchronous Events Supported 00:15:07.038 Namespace Attribute Notices: Supported 00:15:07.038 Firmware Activation Notices: Not Supported 00:15:07.038 ANA Change Notices: Not Supported 00:15:07.038 PLE Aggregate Log Change Notices: Not Supported 00:15:07.038 LBA Status Info Alert Notices: Not Supported 00:15:07.038 EGE Aggregate Log Change Notices: Not Supported 00:15:07.038 Normal NVM Subsystem Shutdown event: Not Supported 00:15:07.038 Zone Descriptor Change Notices: Not Supported 00:15:07.038 Discovery Log Change Notices: Not Supported 00:15:07.038 Controller Attributes 00:15:07.038 128-bit Host Identifier: Supported 00:15:07.038 Non-Operational Permissive Mode: Not Supported 00:15:07.038 NVM Sets: Not Supported 00:15:07.038 Read Recovery Levels: Not Supported 00:15:07.038 Endurance Groups: Not Supported 00:15:07.038 Predictable Latency Mode: Not Supported 00:15:07.038 Traffic Based Keep ALive: Not Supported 00:15:07.038 Namespace Granularity: Not Supported 00:15:07.038 SQ Associations: Not Supported 00:15:07.038 UUID List: Not Supported 00:15:07.038 Multi-Domain Subsystem: Not Supported 00:15:07.038 Fixed Capacity Management: Not Supported 00:15:07.038 Variable Capacity Management: Not Supported 00:15:07.038 Delete Endurance Group: Not Supported 00:15:07.038 Delete NVM Set: Not Supported 00:15:07.038 Extended LBA Formats Supported: Not Supported 00:15:07.038 Flexible Data Placement Supported: Not Supported 00:15:07.038 00:15:07.038 Controller Memory Buffer Support 00:15:07.038 ================================ 00:15:07.038 Supported: No 00:15:07.038 00:15:07.038 Persistent Memory Region Support 00:15:07.038 ================================ 00:15:07.038 Supported: No 00:15:07.038 00:15:07.038 Admin Command Set Attributes 00:15:07.038 ============================ 00:15:07.038 Security Send/Receive: Not Supported 00:15:07.038 Format NVM: Not Supported 00:15:07.038 Firmware Activate/Download: Not Supported 00:15:07.038 Namespace Management: Not Supported 00:15:07.038 Device Self-Test: Not Supported 00:15:07.038 Directives: Not Supported 00:15:07.038 NVMe-MI: Not Supported 00:15:07.038 Virtualization Management: Not Supported 00:15:07.038 Doorbell Buffer Config: Not Supported 00:15:07.038 Get LBA Status Capability: Not Supported 00:15:07.038 Command & Feature Lockdown Capability: Not Supported 00:15:07.038 Abort Command Limit: 4 00:15:07.038 Async Event Request Limit: 4 00:15:07.038 Number of Firmware Slots: N/A 00:15:07.038 Firmware Slot 1 Read-Only: N/A 00:15:07.038 Firmware Activation Without Reset: N/A 00:15:07.038 Multiple Update Detection Support: N/A 00:15:07.038 Firmware Update Granularity: No Information Provided 00:15:07.038 Per-Namespace SMART Log: No 00:15:07.038 Asymmetric Namespace Access Log Page: Not Supported 00:15:07.038 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:07.038 Command Effects Log Page: Supported 00:15:07.039 Get Log Page Extended Data: Supported 00:15:07.039 Telemetry Log Pages: Not Supported 00:15:07.039 Persistent Event Log Pages: Not Supported 00:15:07.039 Supported Log Pages Log Page: May Support 00:15:07.039 Commands Supported & Effects Log Page: Not Supported 00:15:07.039 Feature Identifiers & Effects Log Page:May Support 00:15:07.039 NVMe-MI Commands & Effects Log Page: May Support 
00:15:07.039 Data Area 4 for Telemetry Log: Not Supported 00:15:07.039 Error Log Page Entries Supported: 128 00:15:07.039 Keep Alive: Supported 00:15:07.039 Keep Alive Granularity: 10000 ms 00:15:07.039 00:15:07.039 NVM Command Set Attributes 00:15:07.039 ========================== 00:15:07.039 Submission Queue Entry Size 00:15:07.039 Max: 64 00:15:07.039 Min: 64 00:15:07.039 Completion Queue Entry Size 00:15:07.039 Max: 16 00:15:07.039 Min: 16 00:15:07.039 Number of Namespaces: 32 00:15:07.039 Compare Command: Supported 00:15:07.039 Write Uncorrectable Command: Not Supported 00:15:07.039 Dataset Management Command: Supported 00:15:07.039 Write Zeroes Command: Supported 00:15:07.039 Set Features Save Field: Not Supported 00:15:07.039 Reservations: Not Supported 00:15:07.039 Timestamp: Not Supported 00:15:07.039 Copy: Supported 00:15:07.039 Volatile Write Cache: Present 00:15:07.039 Atomic Write Unit (Normal): 1 00:15:07.039 Atomic Write Unit (PFail): 1 00:15:07.039 Atomic Compare & Write Unit: 1 00:15:07.039 Fused Compare & Write: Supported 00:15:07.039 Scatter-Gather List 00:15:07.039 SGL Command Set: Supported (Dword aligned) 00:15:07.039 SGL Keyed: Not Supported 00:15:07.039 SGL Bit Bucket Descriptor: Not Supported 00:15:07.039 SGL Metadata Pointer: Not Supported 00:15:07.039 Oversized SGL: Not Supported 00:15:07.039 SGL Metadata Address: Not Supported 00:15:07.039 SGL Offset: Not Supported 00:15:07.039 Transport SGL Data Block: Not Supported 00:15:07.039 Replay Protected Memory Block: Not Supported 00:15:07.039 00:15:07.039 Firmware Slot Information 00:15:07.039 ========================= 00:15:07.039 Active slot: 1 00:15:07.039 Slot 1 Firmware Revision: 25.01 00:15:07.039 00:15:07.039 00:15:07.039 Commands Supported and Effects 00:15:07.039 ============================== 00:15:07.039 Admin Commands 00:15:07.039 -------------- 00:15:07.039 Get Log Page (02h): Supported 00:15:07.039 Identify (06h): Supported 00:15:07.039 Abort (08h): Supported 00:15:07.039 Set Features (09h): Supported 00:15:07.039 Get Features (0Ah): Supported 00:15:07.039 Asynchronous Event Request (0Ch): Supported 00:15:07.039 Keep Alive (18h): Supported 00:15:07.039 I/O Commands 00:15:07.039 ------------ 00:15:07.039 Flush (00h): Supported LBA-Change 00:15:07.039 Write (01h): Supported LBA-Change 00:15:07.039 Read (02h): Supported 00:15:07.039 Compare (05h): Supported 00:15:07.039 Write Zeroes (08h): Supported LBA-Change 00:15:07.039 Dataset Management (09h): Supported LBA-Change 00:15:07.039 Copy (19h): Supported LBA-Change 00:15:07.039 00:15:07.039 Error Log 00:15:07.039 ========= 00:15:07.039 00:15:07.039 Arbitration 00:15:07.039 =========== 00:15:07.039 Arbitration Burst: 1 00:15:07.039 00:15:07.039 Power Management 00:15:07.039 ================ 00:15:07.039 Number of Power States: 1 00:15:07.039 Current Power State: Power State #0 00:15:07.039 Power State #0: 00:15:07.039 Max Power: 0.00 W 00:15:07.039 Non-Operational State: Operational 00:15:07.039 Entry Latency: Not Reported 00:15:07.039 Exit Latency: Not Reported 00:15:07.039 Relative Read Throughput: 0 00:15:07.039 Relative Read Latency: 0 00:15:07.039 Relative Write Throughput: 0 00:15:07.039 Relative Write Latency: 0 00:15:07.039 Idle Power: Not Reported 00:15:07.039 Active Power: Not Reported 00:15:07.039 Non-Operational Permissive Mode: Not Supported 00:15:07.039 00:15:07.039 Health Information 00:15:07.039 ================== 00:15:07.039 Critical Warnings: 00:15:07.039 Available Spare Space: OK 00:15:07.039 Temperature: OK 00:15:07.039 Device 
Reliability: OK 00:15:07.039 Read Only: No 00:15:07.039 Volatile Memory Backup: OK 00:15:07.039 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:07.039 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:07.039 Available Spare: 0% 00:15:07.039 Available Spare Threshold: 0% 00:15:07.039 Life Percentage Used: 0% 00:15:07.039 Data Units Read: 0 00:15:07.039 Data Units Written: 0 00:15:07.039 Host Read Commands: 0 00:15:07.039 Host Write Commands: 0 00:15:07.039 Controller Busy Time: 0 minutes 00:15:07.039 Power Cycles: 0 00:15:07.039 Power On Hours: 0 hours 00:15:07.039 Unsafe Shutdowns: 0 00:15:07.039 Unrecoverable Media Errors: 0 00:15:07.039 Lifetime Error Log Entries: 0 00:15:07.039 Warning Temperature Time: 0 minutes 00:15:07.039 Critical Temperature Time: 0 minutes 00:15:07.039 [2024-11-04 12:19:41.409883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:07.039 [2024-11-04 12:19:41.417753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:07.039 [2024-11-04 12:19:41.417783] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:07.039 [2024-11-04 12:19:41.417792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.039 [2024-11-04 12:19:41.417799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.039 [2024-11-04 12:19:41.417805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.039 [2024-11-04 12:19:41.417812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.039 [2024-11-04 12:19:41.417861] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:07.039 [2024-11-04 12:19:41.417871] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:07.039 [2024-11-04 12:19:41.418867] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:07.039 [2024-11-04 12:19:41.418916] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:07.039 [2024-11-04 12:19:41.418923] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:07.039 [2024-11-04 12:19:41.419877] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:07.039 [2024-11-04 12:19:41.419889] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:07.039 [2024-11-04 12:19:41.419941] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:07.039 [2024-11-04 12:19:41.421317] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:07.039 00:15:07.039 Number of Queues 00:15:07.039 ================ 00:15:07.039 Number of
I/O Submission Queues: 127 00:15:07.039 Number of I/O Completion Queues: 127 00:15:07.039 00:15:07.039 Active Namespaces 00:15:07.039 ================= 00:15:07.039 Namespace ID:1 00:15:07.039 Error Recovery Timeout: Unlimited 00:15:07.039 Command Set Identifier: NVM (00h) 00:15:07.039 Deallocate: Supported 00:15:07.039 Deallocated/Unwritten Error: Not Supported 00:15:07.040 Deallocated Read Value: Unknown 00:15:07.040 Deallocate in Write Zeroes: Not Supported 00:15:07.040 Deallocated Guard Field: 0xFFFF 00:15:07.040 Flush: Supported 00:15:07.040 Reservation: Supported 00:15:07.040 Namespace Sharing Capabilities: Multiple Controllers 00:15:07.040 Size (in LBAs): 131072 (0GiB) 00:15:07.040 Capacity (in LBAs): 131072 (0GiB) 00:15:07.040 Utilization (in LBAs): 131072 (0GiB) 00:15:07.040 NGUID: A208F738FC724CC1B94CBC03713F2540 00:15:07.040 UUID: a208f738-fc72-4cc1-b94c-bc03713f2540 00:15:07.040 Thin Provisioning: Not Supported 00:15:07.040 Per-NS Atomic Units: Yes 00:15:07.040 Atomic Boundary Size (Normal): 0 00:15:07.040 Atomic Boundary Size (PFail): 0 00:15:07.040 Atomic Boundary Offset: 0 00:15:07.040 Maximum Single Source Range Length: 65535 00:15:07.040 Maximum Copy Length: 65535 00:15:07.040 Maximum Source Range Count: 1 00:15:07.040 NGUID/EUI64 Never Reused: No 00:15:07.040 Namespace Write Protected: No 00:15:07.040 Number of LBA Formats: 1 00:15:07.040 Current LBA Format: LBA Format #00 00:15:07.040 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:07.040 00:15:07.040 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:07.300 [2024-11-04 12:19:41.614802] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:12.586 Initializing NVMe Controllers 00:15:12.586 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:12.586 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:12.586 Initialization complete. Launching workers. 
00:15:12.586 ======================================================== 00:15:12.586 Latency(us) 00:15:12.586 Device Information : IOPS MiB/s Average min max 00:15:12.586 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39997.60 156.24 3200.25 841.60 6825.10 00:15:12.586 ======================================================== 00:15:12.586 Total : 39997.60 156.24 3200.25 841.60 6825.10 00:15:12.586 00:15:12.586 [2024-11-04 12:19:46.727928] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:12.586 12:19:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:12.586 [2024-11-04 12:19:46.916527] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.878 Initializing NVMe Controllers 00:15:17.878 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:17.878 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:17.878 Initialization complete. Launching workers. 00:15:17.878 ======================================================== 00:15:17.878 Latency(us) 00:15:17.878 Device Information : IOPS MiB/s Average min max 00:15:17.878 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35107.60 137.14 3647.30 1106.01 8010.15 00:15:17.878 ======================================================== 00:15:17.878 Total : 35107.60 137.14 3647.30 1106.01 8010.15 00:15:17.878 00:15:17.878 [2024-11-04 12:19:51.938330] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.878 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:17.878 [2024-11-04 12:19:52.128435] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:23.173 [2024-11-04 12:19:57.273843] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:23.173 Initializing NVMe Controllers 00:15:23.173 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:23.173 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:23.173 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:23.173 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:23.173 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:23.173 Initialization complete. Launching workers. 
00:15:23.173 Starting thread on core 2 00:15:23.173 Starting thread on core 3 00:15:23.173 Starting thread on core 1 00:15:23.173 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:23.173 [2024-11-04 12:19:57.525066] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:26.476 [2024-11-04 12:20:00.582516] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:26.476 Initializing NVMe Controllers 00:15:26.476 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:26.476 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:26.476 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:26.476 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:26.476 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:26.476 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:26.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:26.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:26.476 Initialization complete. Launching workers. 00:15:26.476 Starting thread on core 1 with urgent priority queue 00:15:26.476 Starting thread on core 2 with urgent priority queue 00:15:26.476 Starting thread on core 3 with urgent priority queue 00:15:26.476 Starting thread on core 0 with urgent priority queue 00:15:26.476 SPDK bdev Controller (SPDK2 ) core 0: 9349.67 IO/s 10.70 secs/100000 ios 00:15:26.476 SPDK bdev Controller (SPDK2 ) core 1: 10847.33 IO/s 9.22 secs/100000 ios 00:15:26.476 SPDK bdev Controller (SPDK2 ) core 2: 11212.67 IO/s 8.92 secs/100000 ios 00:15:26.476 SPDK bdev Controller (SPDK2 ) core 3: 9146.00 IO/s 10.93 secs/100000 ios 00:15:26.476 ======================================================== 00:15:26.476 00:15:26.476 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:26.476 [2024-11-04 12:20:00.846186] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:26.476 Initializing NVMe Controllers 00:15:26.476 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:26.476 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:26.476 Namespace ID: 1 size: 0GB 00:15:26.476 Initialization complete. 00:15:26.476 INFO: using host memory buffer for IO 00:15:26.476 Hello world! 
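Both example binaries in this stretch take the same -r transport string already used by spdk_nvme_identify and spdk_nvme_perf. A sketch of the two invocations, assuming an SPDK build tree laid out as in this job and a target already serving the vfio-user socket directory:

  TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

  # 3-second arbitration run across four cores against the vfio-user controller
  build/examples/arbitration -t 3 -r "$TR" -d 256 -g

  # single write/read round trip through namespace 1 ("Hello world!" above)
  build/examples/hello_world -d 256 -g -r "$TR"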
00:15:26.476 [2024-11-04 12:20:00.858266] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:26.476 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:26.737 [2024-11-04 12:20:01.116997] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:27.678 Initializing NVMe Controllers 00:15:27.678 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:27.678 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:27.679 Initialization complete. Launching workers. 00:15:27.679 submit (in ns) avg, min, max = 11237.7, 3955.0, 4000842.5 00:15:27.679 complete (in ns) avg, min, max = 17145.9, 2391.7, 3998471.7 00:15:27.679 00:15:27.679 Submit histogram 00:15:27.679 ================ 00:15:27.679 Range in us Cumulative Count 00:15:27.679 3.947 - 3.973: 0.8857% ( 169) 00:15:27.679 3.973 - 4.000: 4.6903% ( 726) 00:15:27.679 4.000 - 4.027: 13.1276% ( 1610) 00:15:27.679 4.027 - 4.053: 25.0655% ( 2278) 00:15:27.679 4.053 - 4.080: 36.0182% ( 2090) 00:15:27.679 4.080 - 4.107: 45.9543% ( 1896) 00:15:27.679 4.107 - 4.133: 60.5492% ( 2785) 00:15:27.679 4.133 - 4.160: 75.8516% ( 2920) 00:15:27.679 4.160 - 4.187: 88.1249% ( 2342) 00:15:27.679 4.187 - 4.213: 95.2521% ( 1360) 00:15:27.679 4.213 - 4.240: 98.0558% ( 535) 00:15:27.679 4.240 - 4.267: 99.0148% ( 183) 00:15:27.679 4.267 - 4.293: 99.2820% ( 51) 00:15:27.679 4.293 - 4.320: 99.3449% ( 12) 00:15:27.679 4.320 - 4.347: 99.3607% ( 3) 00:15:27.679 4.347 - 4.373: 99.3659% ( 1) 00:15:27.679 4.480 - 4.507: 99.3711% ( 1) 00:15:27.679 4.720 - 4.747: 99.3764% ( 1) 00:15:27.679 4.907 - 4.933: 99.3816% ( 1) 00:15:27.679 5.013 - 5.040: 99.3869% ( 1) 00:15:27.679 5.493 - 5.520: 99.3921% ( 1) 00:15:27.679 5.600 - 5.627: 99.3973% ( 1) 00:15:27.679 5.760 - 5.787: 99.4026% ( 1) 00:15:27.679 5.947 - 5.973: 99.4078% ( 1) 00:15:27.679 6.027 - 6.053: 99.4131% ( 1) 00:15:27.679 6.053 - 6.080: 99.4235% ( 2) 00:15:27.679 6.080 - 6.107: 99.4393% ( 3) 00:15:27.679 6.133 - 6.160: 99.4445% ( 1) 00:15:27.679 6.160 - 6.187: 99.4550% ( 2) 00:15:27.679 6.187 - 6.213: 99.4602% ( 1) 00:15:27.679 6.213 - 6.240: 99.4759% ( 3) 00:15:27.679 6.240 - 6.267: 99.4812% ( 1) 00:15:27.679 6.267 - 6.293: 99.4917% ( 2) 00:15:27.679 6.320 - 6.347: 99.4969% ( 1) 00:15:27.679 6.347 - 6.373: 99.5074% ( 2) 00:15:27.679 6.373 - 6.400: 99.5179% ( 2) 00:15:27.679 6.400 - 6.427: 99.5284% ( 2) 00:15:27.679 6.480 - 6.507: 99.5388% ( 2) 00:15:27.679 6.507 - 6.533: 99.5598% ( 4) 00:15:27.679 6.560 - 6.587: 99.5755% ( 3) 00:15:27.679 6.587 - 6.613: 99.5965% ( 4) 00:15:27.679 6.613 - 6.640: 99.6017% ( 1) 00:15:27.679 6.640 - 6.667: 99.6070% ( 1) 00:15:27.679 6.667 - 6.693: 99.6174% ( 2) 00:15:27.679 6.693 - 6.720: 99.6332% ( 3) 00:15:27.679 6.720 - 6.747: 99.6384% ( 1) 00:15:27.679 6.747 - 6.773: 99.6489% ( 2) 00:15:27.679 6.773 - 6.800: 99.6541% ( 1) 00:15:27.679 6.827 - 6.880: 99.6594% ( 1) 00:15:27.679 6.880 - 6.933: 99.6698% ( 2) 00:15:27.679 6.933 - 6.987: 99.6803% ( 2) 00:15:27.679 6.987 - 7.040: 99.6908% ( 2) 00:15:27.679 7.040 - 7.093: 99.7013% ( 2) 00:15:27.679 7.093 - 7.147: 99.7065% ( 1) 00:15:27.679 7.147 - 7.200: 99.7118% ( 1) 00:15:27.679 7.200 - 7.253: 99.7170% ( 1) 00:15:27.679 7.307 - 7.360: 99.7275% ( 2) 00:15:27.679 7.360 - 7.413: 99.7432% ( 3) 
00:15:27.679 7.413 - 7.467: 99.7589% ( 3) 00:15:27.679 7.520 - 7.573: 99.7642% ( 1) 00:15:27.679 7.627 - 7.680: 99.7694% ( 1) 00:15:27.679 7.893 - 7.947: 99.7747% ( 1) 00:15:27.679 8.000 - 8.053: 99.7799% ( 1) 00:15:27.679 8.053 - 8.107: 99.7851% ( 1) 00:15:27.679 8.160 - 8.213: 99.7904% ( 1) 00:15:27.679 8.320 - 8.373: 99.7956% ( 1) 00:15:27.679 9.173 - 9.227: 99.8009% ( 1) 00:15:27.679 10.347 - 10.400: 99.8061% ( 1) 00:15:27.679 12.213 - 12.267: 99.8113% ( 1) 00:15:27.679 14.827 - 14.933: 99.8166% ( 1) 00:15:27.679 15.467 - 15.573: 99.8218% ( 1) 00:15:27.679 3986.773 - 4014.080: 100.0000% ( 34) 00:15:27.679 00:15:27.679 Complete histogram 00:15:27.679 ================== 00:15:27.679 Range in us Cumulative Count 00:15:27.679 2.387 - 2.400: 0.3354% ( 64) 00:15:27.679 2.400 - 2.413: 0.9014% ( 108) 00:15:27.679 2.413 - 2.427: 0.9852% ( 16) 00:15:27.679 2.427 - 2.440: 1.0743% ( 17) 00:15:27.679 2.440 - 2.453: 2.7932% ( 328) 00:15:27.679 [2024-11-04 12:20:02.216432] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:27.941 2.453 - 2.467: 39.9434% ( 7089) 00:15:27.941 2.467 - 2.480: 48.8471% ( 1699) 00:15:27.941 2.480 - 2.493: 70.0189% ( 4040) 00:15:27.941 2.493 - 2.507: 77.4552% ( 1419) 00:15:27.941 2.507 - 2.520: 81.0135% ( 679) 00:15:27.941 2.520 - 2.533: 84.8444% ( 731) 00:15:27.941 2.533 - 2.547: 89.9748% ( 979) 00:15:27.941 2.547 - 2.560: 94.1254% ( 792) 00:15:27.941 2.560 - 2.573: 96.8976% ( 529) 00:15:27.941 2.573 - 2.587: 98.5326% ( 312) 00:15:27.941 2.587 - 2.600: 99.1720% ( 122) 00:15:27.941 2.600 - 2.613: 99.3711% ( 38) 00:15:27.941 2.613 - 2.627: 99.4131% ( 8) 00:15:27.941 2.627 - 2.640: 99.4288% ( 3) 00:15:27.941 3.013 - 3.027: 99.4340% ( 1) 00:15:27.941 3.107 - 3.120: 99.4393% ( 1) 00:15:27.941 4.560 - 4.587: 99.4445% ( 1) 00:15:27.941 4.613 - 4.640: 99.4550% ( 2) 00:15:27.941 4.640 - 4.667: 99.4602% ( 1) 00:15:27.941 4.720 - 4.747: 99.4655% ( 1) 00:15:27.941 4.853 - 4.880: 99.4759% ( 2) 00:15:27.941 4.907 - 4.933: 99.4864% ( 2) 00:15:27.941 4.933 - 4.960: 99.4917% ( 1) 00:15:27.941 4.960 - 4.987: 99.5021% ( 2) 00:15:27.941 4.987 - 5.013: 99.5179% ( 3) 00:15:27.941 5.013 - 5.040: 99.5284% ( 2) 00:15:27.941 5.093 - 5.120: 99.5388% ( 2) 00:15:27.941 5.173 - 5.200: 99.5441% ( 1) 00:15:27.941 5.253 - 5.280: 99.5493% ( 1) 00:15:27.941 5.280 - 5.307: 99.5546% ( 1) 00:15:27.941 5.333 - 5.360: 99.5598% ( 1) 00:15:27.941 5.467 - 5.493: 99.5650% ( 1) 00:15:27.941 5.520 - 5.547: 99.5755% ( 2) 00:15:27.941 5.547 - 5.573: 99.5808% ( 1) 00:15:27.941 5.627 - 5.653: 99.5860% ( 1) 00:15:27.941 5.680 - 5.707: 99.5912% ( 1) 00:15:27.941 5.733 - 5.760: 99.5965% ( 1) 00:15:27.941 5.840 - 5.867: 99.6070% ( 2) 00:15:27.941 5.867 - 5.893: 99.6174% ( 2) 00:15:27.941 6.880 - 6.933: 99.6227% ( 1) 00:15:27.941 10.080 - 10.133: 99.6279% ( 1) 00:15:27.941 13.867 - 13.973: 99.6332% ( 1) 00:15:27.941 3986.773 - 4014.080: 100.0000% ( 70) 00:15:27.941 00 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4
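The aer_vfio_user helper entered above coordinates the AER listener and the RPC side through a touch file; the waitforfile and rm -f lines in the trace that follows are that handshake. A condensed sketch of the flow, assuming the in-tree test binary and the paths shown in the trace (the background/wait structure and the polling loop are inferred from the $aerpid, waitforfile, and wait lines, not copied from the script):

  # start the AER listener in the background; -t makes it touch the marker
  # file once its callbacks are registered (the run above passes -n 2)
  test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file &
  aerpid=$!

  # waitforfile: block until the listener signals readiness, then clear it
  while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done
  rm -f /tmp/aer_touch_file

  # trigger the Changed Namespace notice the listener is waiting for
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2

  wait $aerpid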
12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:27.941 [ 00:15:27.941 { 00:15:27.941 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:27.941 "subtype": "Discovery", 00:15:27.941 "listen_addresses": [], 00:15:27.941 "allow_any_host": true, 00:15:27.941 "hosts": [] 00:15:27.941 }, 00:15:27.941 { 00:15:27.941 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:27.941 "subtype": "NVMe", 00:15:27.941 "listen_addresses": [ 00:15:27.941 { 00:15:27.941 "trtype": "VFIOUSER", 00:15:27.941 "adrfam": "IPv4", 00:15:27.941 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:27.941 "trsvcid": "0" 00:15:27.941 } 00:15:27.941 ], 00:15:27.941 "allow_any_host": true, 00:15:27.941 "hosts": [], 00:15:27.941 "serial_number": "SPDK1", 00:15:27.941 "model_number": "SPDK bdev Controller", 00:15:27.941 "max_namespaces": 32, 00:15:27.941 "min_cntlid": 1, 00:15:27.941 "max_cntlid": 65519, 00:15:27.941 "namespaces": [ 00:15:27.941 { 00:15:27.941 "nsid": 1, 00:15:27.941 "bdev_name": "Malloc1", 00:15:27.941 "name": "Malloc1", 00:15:27.941 "nguid": "C89194892734416B890EF85E691F0919", 00:15:27.941 "uuid": "c8919489-2734-416b-890e-f85e691f0919" 00:15:27.941 }, 00:15:27.941 { 00:15:27.941 "nsid": 2, 00:15:27.941 "bdev_name": "Malloc3", 00:15:27.941 "name": "Malloc3", 00:15:27.941 "nguid": "BB0BF048B3854199A0EDFFAAF7C6FD03", 00:15:27.941 "uuid": "bb0bf048-b385-4199-a0ed-ffaaf7c6fd03" 00:15:27.941 } 00:15:27.941 ] 00:15:27.941 }, 00:15:27.941 { 00:15:27.941 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:27.941 "subtype": "NVMe", 00:15:27.941 "listen_addresses": [ 00:15:27.941 { 00:15:27.941 "trtype": "VFIOUSER", 00:15:27.941 "adrfam": "IPv4", 00:15:27.941 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:27.941 "trsvcid": "0" 00:15:27.941 } 00:15:27.941 ], 00:15:27.941 "allow_any_host": true, 00:15:27.941 "hosts": [], 00:15:27.941 "serial_number": "SPDK2", 00:15:27.941 "model_number": "SPDK bdev Controller", 00:15:27.941 "max_namespaces": 32, 00:15:27.941 "min_cntlid": 1, 00:15:27.941 "max_cntlid": 65519, 00:15:27.941 "namespaces": [ 00:15:27.941 { 00:15:27.941 "nsid": 1, 00:15:27.941 "bdev_name": "Malloc2", 00:15:27.941 "name": "Malloc2", 00:15:27.941 "nguid": "A208F738FC724CC1B94CBC03713F2540", 00:15:27.941 "uuid": "a208f738-fc72-4cc1-b94c-bc03713f2540" 00:15:27.941 } 00:15:27.941 ] 00:15:27.941 } 00:15:27.941 ] 00:15:27.941 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:27.941 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1603268 00:15:27.941 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:27.941 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:27.941 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:27.941 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:27.941 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:27.941 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:27.941 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:27.941 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:28.201 [2024-11-04 12:20:02.611400] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:28.201 Malloc4 00:15:28.202 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:28.462 [2024-11-04 12:20:02.781550] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:28.462 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:28.462 Asynchronous Event Request test 00:15:28.462 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:28.462 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:28.462 Registering asynchronous event callbacks... 00:15:28.462 Starting namespace attribute notice tests for all controllers... 00:15:28.462 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:28.462 aer_cb - Changed Namespace 00:15:28.462 Cleaning up... 00:15:28.462 [ 00:15:28.462 { 00:15:28.462 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:28.462 "subtype": "Discovery", 00:15:28.462 "listen_addresses": [], 00:15:28.462 "allow_any_host": true, 00:15:28.462 "hosts": [] 00:15:28.462 }, 00:15:28.462 { 00:15:28.462 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:28.462 "subtype": "NVMe", 00:15:28.462 "listen_addresses": [ 00:15:28.462 { 00:15:28.462 "trtype": "VFIOUSER", 00:15:28.462 "adrfam": "IPv4", 00:15:28.462 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:28.462 "trsvcid": "0" 00:15:28.462 } 00:15:28.462 ], 00:15:28.462 "allow_any_host": true, 00:15:28.462 "hosts": [], 00:15:28.462 "serial_number": "SPDK1", 00:15:28.462 "model_number": "SPDK bdev Controller", 00:15:28.462 "max_namespaces": 32, 00:15:28.462 "min_cntlid": 1, 00:15:28.462 "max_cntlid": 65519, 00:15:28.462 "namespaces": [ 00:15:28.462 { 00:15:28.462 "nsid": 1, 00:15:28.462 "bdev_name": "Malloc1", 00:15:28.462 "name": "Malloc1", 00:15:28.462 "nguid": "C89194892734416B890EF85E691F0919", 00:15:28.462 "uuid": "c8919489-2734-416b-890e-f85e691f0919" 00:15:28.462 }, 00:15:28.462 { 00:15:28.462 "nsid": 2, 00:15:28.462 "bdev_name": "Malloc3", 00:15:28.462 "name": "Malloc3", 00:15:28.462 "nguid": "BB0BF048B3854199A0EDFFAAF7C6FD03", 00:15:28.462 "uuid": "bb0bf048-b385-4199-a0ed-ffaaf7c6fd03" 00:15:28.462 } 00:15:28.462 ] 00:15:28.462 }, 00:15:28.462 { 00:15:28.462 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:28.462 "subtype": "NVMe", 00:15:28.462 "listen_addresses": [ 00:15:28.462 { 00:15:28.462 "trtype": "VFIOUSER", 00:15:28.462 "adrfam": "IPv4", 00:15:28.462 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:28.462 "trsvcid": "0" 00:15:28.462 } 00:15:28.462 ], 00:15:28.462 "allow_any_host": true, 00:15:28.462 "hosts": [], 00:15:28.462 "serial_number": "SPDK2", 00:15:28.462 "model_number": "SPDK bdev 
Controller", 00:15:28.462 "max_namespaces": 32, 00:15:28.462 "min_cntlid": 1, 00:15:28.462 "max_cntlid": 65519, 00:15:28.462 "namespaces": [ 00:15:28.462 { 00:15:28.462 "nsid": 1, 00:15:28.462 "bdev_name": "Malloc2", 00:15:28.462 "name": "Malloc2", 00:15:28.462 "nguid": "A208F738FC724CC1B94CBC03713F2540", 00:15:28.462 "uuid": "a208f738-fc72-4cc1-b94c-bc03713f2540" 00:15:28.462 }, 00:15:28.462 { 00:15:28.462 "nsid": 2, 00:15:28.462 "bdev_name": "Malloc4", 00:15:28.462 "name": "Malloc4", 00:15:28.462 "nguid": "EE8830E151B24FD6B94B083A4A864C2D", 00:15:28.462 "uuid": "ee8830e1-51b2-4fd6-b94b-083a4a864c2d" 00:15:28.462 } 00:15:28.462 ] 00:15:28.462 } 00:15:28.462 ] 00:15:28.462 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1603268 00:15:28.462 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:28.462 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1594163 00:15:28.462 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1594163 ']' 00:15:28.462 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1594163 00:15:28.462 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:28.462 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.462 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1594163 00:15:28.722 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1594163' 00:15:28.723 killing process with pid 1594163 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1594163 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1594163 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1603421 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1603421' 00:15:28.723 Process pid: 1603421 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1603421 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1603421 ']' 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:28.723 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:28.723 [2024-11-04 12:20:03.275518] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:28.723 [2024-11-04 12:20:03.276416] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:15:28.723 [2024-11-04 12:20:03.276456] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.983 [2024-11-04 12:20:03.340195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:28.983 [2024-11-04 12:20:03.374983] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.983 [2024-11-04 12:20:03.375018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.983 [2024-11-04 12:20:03.375027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.984 [2024-11-04 12:20:03.375033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.984 [2024-11-04 12:20:03.375039] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.984 [2024-11-04 12:20:03.376518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.984 [2024-11-04 12:20:03.376640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.984 [2024-11-04 12:20:03.376659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:28.984 [2024-11-04 12:20:03.376662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.984 [2024-11-04 12:20:03.431475] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:28.984 [2024-11-04 12:20:03.431556] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:28.984 [2024-11-04 12:20:03.432540] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:28.984 [2024-11-04 12:20:03.433483] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
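The NOTICE lines on either side of this point show each spdk_thread switching to interrupt mode after the restart above. Boiled down, the relaunch is the same target binary with two extra knobs; a minimal sketch using the flags recorded in this run, with paths relative to the spdk checkout (the '-M -I' transport arguments are quoted verbatim from the trace above):

    # Relaunch the target in interrupt mode, pinned to cores 0-3,
    # with shared-memory id 0 and all tracepoint groups enabled.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    nvmfpid=$!

    # Recreate the vfio-user transport, passing the extra transport
    # arguments this run supplies through setup_nvmf_vfio_user.
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I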
00:15:28.984 [2024-11-04 12:20:03.433556] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:28.984 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:28.984 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:28.984 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:29.926 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:30.187 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:30.187 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:30.187 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:30.187 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:30.187 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:30.447 Malloc1 00:15:30.447 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:30.707 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:30.967 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:30.967 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:30.967 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:30.967 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:31.227 Malloc2 00:15:31.227 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:31.488 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:31.488 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:31.749 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:31.749 12:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1603421 00:15:31.749 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1603421 ']' 00:15:31.749 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1603421 00:15:31.749 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:31.749 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:31.749 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1603421 00:15:31.749 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:31.749 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:31.749 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1603421' 00:15:31.749 killing process with pid 1603421 00:15:31.749 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1603421 00:15:31.749 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1603421 00:15:32.010 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:32.010 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:32.010 00:15:32.010 real 0m50.569s 00:15:32.010 user 3m15.819s 00:15:32.010 sys 0m2.732s 00:15:32.010 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:32.010 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:32.010 ************************************ 00:15:32.010 END TEST nvmf_vfio_user 00:15:32.010 ************************************ 00:15:32.010 12:20:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:32.010 12:20:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:32.010 12:20:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:32.010 12:20:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:32.010 ************************************ 00:15:32.010 START TEST nvmf_vfio_user_nvme_compliance 00:15:32.010 ************************************ 00:15:32.010 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:32.271 * Looking for test storage... 
00:15:32.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:32.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.272 --rc genhtml_branch_coverage=1 00:15:32.272 --rc genhtml_function_coverage=1 00:15:32.272 --rc genhtml_legend=1 00:15:32.272 --rc geninfo_all_blocks=1 00:15:32.272 --rc geninfo_unexecuted_blocks=1 00:15:32.272 00:15:32.272 ' 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:32.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.272 --rc genhtml_branch_coverage=1 00:15:32.272 --rc genhtml_function_coverage=1 00:15:32.272 --rc genhtml_legend=1 00:15:32.272 --rc geninfo_all_blocks=1 00:15:32.272 --rc geninfo_unexecuted_blocks=1 00:15:32.272 00:15:32.272 ' 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:32.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.272 --rc genhtml_branch_coverage=1 00:15:32.272 --rc genhtml_function_coverage=1 00:15:32.272 --rc genhtml_legend=1 00:15:32.272 --rc geninfo_all_blocks=1 00:15:32.272 --rc geninfo_unexecuted_blocks=1 00:15:32.272 00:15:32.272 ' 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:32.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.272 --rc genhtml_branch_coverage=1 00:15:32.272 --rc genhtml_function_coverage=1 00:15:32.272 --rc genhtml_legend=1 00:15:32.272 --rc geninfo_all_blocks=1 00:15:32.272 --rc 
geninfo_unexecuted_blocks=1 00:15:32.272 00:15:32.272 ' 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:32.272 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:32.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1604165 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1604165' 00:15:32.273 Process pid: 1604165 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1604165 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1604165 ']' 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.273 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:32.273 [2024-11-04 12:20:06.786297] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
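What the trace shows next is the compliance target bring-up: a fresh nvmf_tgt on core mask 0x7 (hence the 'Total cores available: 3' notice below), one malloc-backed subsystem wired to a vfio-user listener, then the CUnit binary pointed at it. Collected into one sketch, with rpc_cmd written out as direct rpc.py calls (rpc_cmd is the autotest wrapper around scripts/rpc.py) and paths relative to the spdk checkout:

    # Fresh target for the compliance suite: core mask 0x7 (3 cores).
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &

    # One subsystem backed by a 64 MiB malloc bdev with 512-byte blocks,
    # listening on the vfio-user socket directory.
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

    # Point the CUnit compliance tests at that endpoint.
    ./test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'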
00:15:32.273 [2024-11-04 12:20:06.786348] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.534 [2024-11-04 12:20:06.847752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:32.534 [2024-11-04 12:20:06.882363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.534 [2024-11-04 12:20:06.882398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.534 [2024-11-04 12:20:06.882406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.534 [2024-11-04 12:20:06.882414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.534 [2024-11-04 12:20:06.882420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.534 [2024-11-04 12:20:06.883862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.534 [2024-11-04 12:20:06.884085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.534 [2024-11-04 12:20:06.884089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.534 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:32.534 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:32.534 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:33.477 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:33.477 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:33.477 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:33.477 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.477 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.477 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.477 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:33.477 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:33.477 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.477 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.477 malloc0 00:15:33.477 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.477 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:33.477 12:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.477 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.477 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.477 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:33.477 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.477 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.477 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.738 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:33.738 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.738 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.738 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.738 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:33.738 00:15:33.738 00:15:33.738 CUnit - A unit testing framework for C - Version 2.1-3 00:15:33.738 http://cunit.sourceforge.net/ 00:15:33.738 00:15:33.738 00:15:33.738 Suite: nvme_compliance 00:15:33.738 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-04 12:20:08.225703] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.738 [2024-11-04 12:20:08.227066] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:33.738 [2024-11-04 12:20:08.227078] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:33.738 [2024-11-04 12:20:08.227083] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:33.738 [2024-11-04 12:20:08.228725] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.738 passed 00:15:33.997 Test: admin_identify_ctrlr_verify_fused ...[2024-11-04 12:20:08.322328] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.997 [2024-11-04 12:20:08.325350] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.997 passed 00:15:33.997 Test: admin_identify_ns ...[2024-11-04 12:20:08.422512] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.997 [2024-11-04 12:20:08.481761] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:33.997 [2024-11-04 12:20:08.489757] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:33.997 [2024-11-04 12:20:08.510872] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:33.997 passed 00:15:34.257 Test: admin_get_features_mandatory_features ...[2024-11-04 12:20:08.602851] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.257 [2024-11-04 12:20:08.605871] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.257 passed 00:15:34.257 Test: admin_get_features_optional_features ...[2024-11-04 12:20:08.700444] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.257 [2024-11-04 12:20:08.703463] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.257 passed 00:15:34.257 Test: admin_set_features_number_of_queues ...[2024-11-04 12:20:08.794579] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.518 [2024-11-04 12:20:08.902860] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.518 passed 00:15:34.518 Test: admin_get_log_page_mandatory_logs ...[2024-11-04 12:20:08.996506] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.518 [2024-11-04 12:20:08.999518] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.518 passed 00:15:34.779 Test: admin_get_log_page_with_lpo ...[2024-11-04 12:20:09.091979] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.779 [2024-11-04 12:20:09.159764] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:34.779 [2024-11-04 12:20:09.172805] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.779 passed 00:15:34.779 Test: fabric_property_get ...[2024-11-04 12:20:09.267363] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.779 [2024-11-04 12:20:09.268610] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:34.779 [2024-11-04 12:20:09.270381] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.779 passed 00:15:35.040 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-04 12:20:09.364912] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.040 [2024-11-04 12:20:09.366152] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:35.040 [2024-11-04 12:20:09.367931] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.040 passed 00:15:35.040 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-04 12:20:09.460993] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.040 [2024-11-04 12:20:09.544756] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:35.040 [2024-11-04 12:20:09.560750] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:35.040 [2024-11-04 12:20:09.565829] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.040 passed 00:15:35.300 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-04 12:20:09.659845] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.300 [2024-11-04 12:20:09.661088] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:35.300 [2024-11-04 12:20:09.662864] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.300 passed 00:15:35.300 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-04 12:20:09.755975] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.300 [2024-11-04 12:20:09.831751] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:35.300 [2024-11-04 12:20:09.855755] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:35.300 [2024-11-04 12:20:09.860841] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.560 passed 00:15:35.560 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-04 12:20:09.954178] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.560 [2024-11-04 12:20:09.955426] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:35.560 [2024-11-04 12:20:09.955448] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:35.560 [2024-11-04 12:20:09.957199] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.560 passed 00:15:35.560 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-04 12:20:10.049301] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.822 [2024-11-04 12:20:10.144755] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:35.822 [2024-11-04 12:20:10.152751] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:35.822 [2024-11-04 12:20:10.160767] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:35.822 [2024-11-04 12:20:10.168756] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:35.822 [2024-11-04 12:20:10.197841] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.822 passed 00:15:35.822 Test: admin_create_io_sq_verify_pc ...[2024-11-04 12:20:10.287422] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.822 [2024-11-04 12:20:10.302760] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:35.822 [2024-11-04 12:20:10.320558] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.822 passed 00:15:36.083 Test: admin_create_io_qp_max_qps ...[2024-11-04 12:20:10.414106] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.025 [2024-11-04 12:20:11.523758] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:37.597 [2024-11-04 12:20:11.915521] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.597 passed 00:15:37.598 Test: admin_create_io_sq_shared_cq ...[2024-11-04 12:20:12.006681] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.598 [2024-11-04 12:20:12.138755] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:37.858 [2024-11-04 12:20:12.175808] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.858 passed 00:15:37.858 00:15:37.858 Run Summary: Type Total Ran Passed Failed Inactive 00:15:37.858 suites 1 1 n/a 0 0 00:15:37.858 tests 18 18 18 0 0 00:15:37.858 asserts 360 
360 360 0 n/a 00:15:37.858 00:15:37.858 Elapsed time = 1.656 seconds 00:15:37.858 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1604165 00:15:37.858 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1604165 ']' 00:15:37.859 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1604165 00:15:37.859 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:37.859 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:37.859 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1604165 00:15:37.859 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:37.859 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:37.859 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1604165' 00:15:37.859 killing process with pid 1604165 00:15:37.859 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1604165 00:15:37.859 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1604165 00:15:37.859 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:37.859 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:37.859 00:15:37.859 real 0m5.922s 00:15:37.859 user 0m16.655s 00:15:37.859 sys 0m0.495s 00:15:37.859 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:37.859 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:37.859 ************************************ 00:15:37.859 END TEST nvmf_vfio_user_nvme_compliance 00:15:37.859 ************************************ 00:15:38.120 12:20:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:38.120 12:20:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:38.120 12:20:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:38.120 12:20:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:38.120 ************************************ 00:15:38.120 START TEST nvmf_vfio_user_fuzz 00:15:38.120 ************************************ 00:15:38.120 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:38.120 * Looking for test storage... 
00:15:38.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:38.121 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:38.382 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:38.382 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:38.382 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:38.382 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:38.382 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:38.382 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:38.382 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:38.382 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:38.382 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:38.382 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:38.382 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:38.382 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:38.382 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:38.382 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:38.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.382 --rc genhtml_branch_coverage=1 00:15:38.382 --rc genhtml_function_coverage=1 00:15:38.382 --rc genhtml_legend=1 00:15:38.382 --rc geninfo_all_blocks=1 00:15:38.382 --rc geninfo_unexecuted_blocks=1 00:15:38.382 00:15:38.382 ' 00:15:38.382 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:38.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.382 --rc genhtml_branch_coverage=1 00:15:38.382 --rc genhtml_function_coverage=1 00:15:38.382 --rc genhtml_legend=1 00:15:38.382 --rc geninfo_all_blocks=1 00:15:38.382 --rc geninfo_unexecuted_blocks=1 00:15:38.382 00:15:38.382 ' 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:38.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.383 --rc genhtml_branch_coverage=1 00:15:38.383 --rc genhtml_function_coverage=1 00:15:38.383 --rc genhtml_legend=1 00:15:38.383 --rc geninfo_all_blocks=1 00:15:38.383 --rc geninfo_unexecuted_blocks=1 00:15:38.383 00:15:38.383 ' 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:38.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.383 --rc genhtml_branch_coverage=1 00:15:38.383 --rc genhtml_function_coverage=1 00:15:38.383 --rc genhtml_legend=1 00:15:38.383 --rc geninfo_all_blocks=1 00:15:38.383 --rc geninfo_unexecuted_blocks=1 00:15:38.383 00:15:38.383 ' 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:38.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1605467 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1605467' 00:15:38.383 Process pid: 1605467 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1605467 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1605467 ']' 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
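
The trace above launches nvmf_tgt and then parks in waitforlisten until the app answers on its RPC socket. A minimal sketch of that polling pattern, assuming rpc.py's generic rpc_get_methods call; the retry count and the echoed message match the trace, while the poll interval and pid check are illustrative reconstructions of what test/common/autotest_common.sh does:

    # poll the RPC socket until the target answers, bailing out if the pid dies first
    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # process exited before listening
        scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
        sleep 0.5
      done
      return 1   # never came up
    }
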
00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:38.383 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:39.326 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:39.326 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:39.326 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:40.268 malloc0 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
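
Spelled out, the rpc_cmd sequence above is equivalent to the following rpc.py calls against the target's default RPC socket; this is just the traced commands restated as a standalone sketch, with the malloc bdev sized by MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user                  # socket directory for the vfio-user listener
    $rpc bdev_malloc_create 64 512 -b malloc0    # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

The resulting transport ID string ('trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user') is what nvme_fuzz is handed in the next record.
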
00:15:40.268 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:12.453 Fuzzing completed. Shutting down the fuzz application 00:16:12.453 00:16:12.453 Dumping successful admin opcodes: 00:16:12.453 8, 9, 10, 24, 00:16:12.453 Dumping successful io opcodes: 00:16:12.453 0, 00:16:12.453 NS: 0x20000081ef00 I/O qp, Total commands completed: 1144612, total successful commands: 4507, random_seed: 2702902080 00:16:12.453 NS: 0x20000081ef00 admin qp, Total commands completed: 143916, total successful commands: 1170, random_seed: 1438118912 00:16:12.453 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:12.453 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.453 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:12.453 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.453 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1605467 00:16:12.453 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1605467 ']' 00:16:12.453 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1605467 00:16:12.453 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:12.453 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:12.453 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1605467 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1605467' 00:16:12.453 killing process with pid 1605467 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1605467 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1605467 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:12.453 00:16:12.453 real 0m33.773s 00:16:12.453 user 0m38.199s 00:16:12.453 sys 0m26.315s 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:12.453 
************************************ 00:16:12.453 END TEST nvmf_vfio_user_fuzz 00:16:12.453 ************************************ 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:12.453 ************************************ 00:16:12.453 START TEST nvmf_auth_target 00:16:12.453 ************************************ 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:12.453 * Looking for test storage... 00:16:12.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:12.453 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:12.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.454 --rc genhtml_branch_coverage=1 00:16:12.454 --rc genhtml_function_coverage=1 00:16:12.454 --rc genhtml_legend=1 00:16:12.454 --rc geninfo_all_blocks=1 00:16:12.454 --rc geninfo_unexecuted_blocks=1 00:16:12.454 00:16:12.454 ' 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:12.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.454 --rc genhtml_branch_coverage=1 00:16:12.454 --rc genhtml_function_coverage=1 00:16:12.454 --rc genhtml_legend=1 00:16:12.454 --rc geninfo_all_blocks=1 00:16:12.454 --rc geninfo_unexecuted_blocks=1 00:16:12.454 00:16:12.454 ' 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:12.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.454 --rc genhtml_branch_coverage=1 00:16:12.454 --rc genhtml_function_coverage=1 00:16:12.454 --rc genhtml_legend=1 00:16:12.454 --rc geninfo_all_blocks=1 00:16:12.454 --rc geninfo_unexecuted_blocks=1 00:16:12.454 00:16:12.454 ' 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:12.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.454 --rc genhtml_branch_coverage=1 00:16:12.454 --rc genhtml_function_coverage=1 00:16:12.454 --rc genhtml_legend=1 00:16:12.454 --rc geninfo_all_blocks=1 00:16:12.454 --rc geninfo_unexecuted_blocks=1 00:16:12.454 00:16:12.454 ' 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:12.454 12:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.454 [... the paths/export.sh PATH expansions and the nvmf/common.sh@51-55 NVMF_APP preamble, identical to the trace shown above for nvmf_vfio_user_fuzz, elided; it again trips the 'common.sh: line 33: [: : integer expression expected' warning ...] 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- #
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:12.454 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.455 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:12.455 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.455 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:12.455 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:12.455 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:12.455 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:19.050 
12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:19.050 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:19.050 12:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:19.050 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:19.050 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:19.050 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:19.050 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:19.051 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:19.051 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:19.051 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:19.051 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:19.313 12:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:19.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:16:19.313 00:16:19.313 --- 10.0.0.2 ping statistics --- 00:16:19.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.313 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:19.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:19.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:16:19.313 00:16:19.313 --- 10.0.0.1 ping statistics --- 00:16:19.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.313 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1615678 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1615678 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1615678 ']' 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
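
Condensed, the nvmf_tcp_init sequence traced above builds the tcp-phy topology: one port of the e810 pair (cvl_0_0) is moved into a private network namespace and addressed as the target, while its peer (cvl_0_1) stays in the root namespace as the initiator. The same steps, restated straight from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # target reachable from the initiator
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the reverse path

From this point NVMF_APP is prefixed with the namespace command (NVMF_TARGET_NS_CMD), which is why nvmfappstart below launches nvmf_tgt under 'ip netns exec cvl_0_0_ns_spdk'.
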
00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:19.313 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1615900 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=a6057a8974c33f3026644480436443c180e27c00edf9b580 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.CDS 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key a6057a8974c33f3026644480436443c180e27c00edf9b580 0 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 a6057a8974c33f3026644480436443c180e27c00edf9b580 0 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=a6057a8974c33f3026644480436443c180e27c00edf9b580 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
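
gen_dhchap_key pulls the requested number of random bytes through xxd and hands the hex string to an inline Python snippet (the 'python -' record above); the snippet body itself is not echoed in the trace. A plausible reconstruction, assuming the standard DH-HMAC-CHAP secret encoding (base64 of the key bytes followed by their little-endian CRC-32, with an assumed two-digit digest index in the middle field, 00 for null):

    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes = the 48-hex-char 'null 48' case
    python - "$key" <<'EOF'
    import base64, struct, sys, zlib
    key = bytes.fromhex(sys.argv[1])
    crc = struct.pack("<I", zlib.crc32(key))   # assumed trailing CRC-32 of the key bytes
    print("DHHC-1:00:%s:" % base64.b64encode(key + crc).decode())
    EOF

The formatted secret is written to a mktemp file (/tmp/spdk.key-null.CDS here) and chmod'ed to 0600 before being registered with the keyring.
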
00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.CDS 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.CDS 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.CDS 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=116cec8898ddb1b7b93d03f5d032722c58b86331e7f5307232a213766015c327 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.0kV 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 116cec8898ddb1b7b93d03f5d032722c58b86331e7f5307232a213766015c327 3 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 116cec8898ddb1b7b93d03f5d032722c58b86331e7f5307232a213766015c327 3 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=116cec8898ddb1b7b93d03f5d032722c58b86331e7f5307232a213766015c327 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:20.259 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.0kV 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.0kV 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.0kV 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=18fc11ef5cab16b4e1f128c18620cb84 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.gUT 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 18fc11ef5cab16b4e1f128c18620cb84 1 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 18fc11ef5cab16b4e1f128c18620cb84 1 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=18fc11ef5cab16b4e1f128c18620cb84 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.gUT 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.gUT 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.gUT 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:20.521 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=26e2cd8e556c9dbff8562279b7c952da45506232a549bf15 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.RTx 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 26e2cd8e556c9dbff8562279b7c952da45506232a549bf15 2 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 26e2cd8e556c9dbff8562279b7c952da45506232a549bf15 2 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:20.522 12:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=26e2cd8e556c9dbff8562279b7c952da45506232a549bf15 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.RTx 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.RTx 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.RTx 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=cb48cae506afad1bf6e50abfc8c397a9d4b1642182900784 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.5Kj 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key cb48cae506afad1bf6e50abfc8c397a9d4b1642182900784 2 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 cb48cae506afad1bf6e50abfc8c397a9d4b1642182900784 2 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=cb48cae506afad1bf6e50abfc8c397a9d4b1642182900784 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:20.522 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.5Kj 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.5Kj 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.5Kj 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=598abe149dea4a8c3e38caf191e99001 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.L1m 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 598abe149dea4a8c3e38caf191e99001 1 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 598abe149dea4a8c3e38caf191e99001 1 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=598abe149dea4a8c3e38caf191e99001 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:20.522 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.L1m 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.L1m 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.L1m 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=76ddf02d3fb4ebbb4b29a0b67f59f06e2dc3e3767d2f9966f43b7cdd8067a90a 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.qCf 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key 76ddf02d3fb4ebbb4b29a0b67f59f06e2dc3e3767d2f9966f43b7cdd8067a90a 3 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 76ddf02d3fb4ebbb4b29a0b67f59f06e2dc3e3767d2f9966f43b7cdd8067a90a 3 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=76ddf02d3fb4ebbb4b29a0b67f59f06e2dc3e3767d2f9966f43b7cdd8067a90a 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.qCf 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.qCf 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.qCf 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1615678 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1615678 ']' 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1615900 /var/tmp/host.sock 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1615900 ']' 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:20.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
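Two SPDK applications are involved from this point on: the nvmf target answering RPCs on /var/tmp/spdk.sock (pid 1615678) and a separate host-side app on /var/tmp/host.sock (pid 1615900) that acts as the NVMe initiator. waitforlisten blocks until the named socket accepts RPCs; a minimal equivalent of what it waits for, run from the spdk repo root (the real helper in autotest_common.sh adds a retry limit on top of the pid liveness check):

pid=1615900
until scripts/rpc.py -s /var/tmp/host.sock rpc_get_methods &>/dev/null; do
    kill -0 "$pid" 2>/dev/null || exit 1  # give up if the app died
    sleep 0.1
done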
00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.784 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.046 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:21.046 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:21.046 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:21.046 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.046 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.046 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.046 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:21.046 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CDS 00:16:21.046 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.046 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.046 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.046 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.CDS 00:16:21.046 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.CDS 00:16:21.307 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.0kV ]] 00:16:21.307 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0kV 00:16:21.307 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.307 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.307 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.307 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0kV 00:16:21.307 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0kV 00:16:21.569 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:21.569 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.gUT 00:16:21.569 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.569 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.569 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.569 12:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.gUT 00:16:21.569 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.gUT 00:16:21.569 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.RTx ]] 00:16:21.569 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RTx 00:16:21.569 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.569 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.569 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.569 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RTx 00:16:21.569 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RTx 00:16:21.829 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:21.829 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5Kj 00:16:21.829 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.829 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.829 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.829 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.5Kj 00:16:21.829 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.5Kj 00:16:22.090 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.L1m ]] 00:16:22.090 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L1m 00:16:22.090 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.090 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.090 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.090 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L1m 00:16:22.090 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L1m 00:16:22.090 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:22.090 12:20:56 
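The registration pattern repeating through auth.sh@108-113 is the same for every slot: each generated key file is added under the same keyring name on both sides, on the target via the default /var/tmp/spdk.sock and on the host app via -s /var/tmp/host.sock, with the controller (bidirectional) keys only registered when gen_dhchap_key produced one. Condensed, the loop amounts to:

for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"                        # target side
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}"  # host side
    if [[ -n ${ckeys[i]} ]]; then
        scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
        scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done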
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qCf 00:16:22.090 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.090 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.090 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.090 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.qCf 00:16:22.090 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.qCf 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.351 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.351 
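Each connect_authenticate round that follows is the same three RPCs with different parameters: restrict the host to a single digest/DH-group combination, bind the round's keyring entries to the host NQN on the target, then attach; the controller only shows up afterwards in bdev_nvme_get_controllers if the DH-HMAC-CHAP handshake succeeded. For this first round (sha256, null, key0), condensed from the trace:

rpc=scripts/rpc.py; host_sock=/var/tmp/host.sock
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
# host side: allow exactly one digest and one DH group for this round
$rpc -s $host_sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
# target side: bind key0/ckey0 to the host NQN
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host side: attach; fails unless authentication completes
$rpc -s $host_sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0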
12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.611 00:16:22.611 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.611 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.611 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.872 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.872 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.872 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.872 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.872 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.873 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.873 { 00:16:22.873 "cntlid": 1, 00:16:22.873 "qid": 0, 00:16:22.873 "state": "enabled", 00:16:22.873 "thread": "nvmf_tgt_poll_group_000", 00:16:22.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:22.873 "listen_address": { 00:16:22.873 "trtype": "TCP", 00:16:22.873 "adrfam": "IPv4", 00:16:22.873 "traddr": "10.0.0.2", 00:16:22.873 "trsvcid": "4420" 00:16:22.873 }, 00:16:22.873 "peer_address": { 00:16:22.873 "trtype": "TCP", 00:16:22.873 "adrfam": "IPv4", 00:16:22.873 "traddr": "10.0.0.1", 00:16:22.873 "trsvcid": "36082" 00:16:22.873 }, 00:16:22.873 "auth": { 00:16:22.873 "state": "completed", 00:16:22.873 "digest": "sha256", 00:16:22.873 "dhgroup": "null" 00:16:22.873 } 00:16:22.873 } 00:16:22.873 ]' 00:16:22.873 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.873 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.873 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.873 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:22.873 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.873 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.873 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.873 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.134 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:16:23.134 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:16:24.076 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.076 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:24.076 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.076 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.076 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.076 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.076 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:24.076 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:24.076 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:24.076 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.076 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:24.076 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:24.076 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:24.076 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.076 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.077 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.077 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.077 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.077 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.077 12:20:58 
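After each SPDK-host attach/detach cycle, the same credentials are re-checked with the kernel initiator: nvme-cli is handed the literal DHHC-1 strings, as in the nvme connect above (support for --dhchap-secret/--dhchap-ctrl-secret assumes nvme-cli 2.x and a kernel built with NVMe in-band authentication). The shape of the call, with the secrets elided:

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "${hostnqn#*uuid:}" -l 0 \
    --dhchap-secret 'DHHC-1:00:<base64 host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<base64 ctrl secret>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0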
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.077 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.338 00:16:24.338 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.338 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.338 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.599 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.599 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.599 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.599 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.599 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.600 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.600 { 00:16:24.600 "cntlid": 3, 00:16:24.600 "qid": 0, 00:16:24.600 "state": "enabled", 00:16:24.600 "thread": "nvmf_tgt_poll_group_000", 00:16:24.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:24.600 "listen_address": { 00:16:24.600 "trtype": "TCP", 00:16:24.600 "adrfam": "IPv4", 00:16:24.600 "traddr": "10.0.0.2", 00:16:24.600 "trsvcid": "4420" 00:16:24.600 }, 00:16:24.600 "peer_address": { 00:16:24.600 "trtype": "TCP", 00:16:24.600 "adrfam": "IPv4", 00:16:24.600 "traddr": "10.0.0.1", 00:16:24.600 "trsvcid": "35274" 00:16:24.600 }, 00:16:24.600 "auth": { 00:16:24.600 "state": "completed", 00:16:24.600 "digest": "sha256", 00:16:24.600 "dhgroup": "null" 00:16:24.600 } 00:16:24.600 } 00:16:24.600 ]' 00:16:24.600 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.600 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.600 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.600 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:24.600 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.600 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.600 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.600 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.861 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:16:24.861 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.811 12:21:00 
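The teardown visible between rounds is symmetric with the setup: drop the host-side bdev controller, disconnect the kernel session, then remove the host entry from the subsystem so the next round can re-add it with a different key binding. Condensed:

scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"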
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.811 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.073 00:16:26.073 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.073 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.073 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.073 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.333 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.333 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.333 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.333 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.333 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.333 { 00:16:26.333 "cntlid": 5, 00:16:26.333 "qid": 0, 00:16:26.333 "state": "enabled", 00:16:26.333 "thread": "nvmf_tgt_poll_group_000", 00:16:26.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:26.333 "listen_address": { 00:16:26.333 "trtype": "TCP", 00:16:26.333 "adrfam": "IPv4", 00:16:26.333 "traddr": "10.0.0.2", 00:16:26.333 "trsvcid": "4420" 00:16:26.333 }, 00:16:26.333 "peer_address": { 00:16:26.333 "trtype": "TCP", 00:16:26.333 "adrfam": "IPv4", 00:16:26.333 "traddr": "10.0.0.1", 00:16:26.333 "trsvcid": "35290" 00:16:26.333 }, 00:16:26.334 "auth": { 00:16:26.334 "state": "completed", 00:16:26.334 "digest": "sha256", 00:16:26.334 "dhgroup": "null" 00:16:26.334 } 00:16:26.334 } 00:16:26.334 ]' 00:16:26.334 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.334 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.334 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.334 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:26.334 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.334 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.334 12:21:00 
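What actually gets asserted each round is read back from the target: nvmf_subsystem_get_qpairs returns an auth block per qpair, and "completed" means the DH-HMAC-CHAP exchange itself finished, not merely that the TCP connection was accepted. The three jq probes traced above amount to:

qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]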
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.334 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.595 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:16:26.595 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:16:27.169 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.169 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:27.169 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.169 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.430 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.430 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.430 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.430 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.430 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:27.430 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.430 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.430 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:27.430 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:27.430 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.430 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:27.430 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.430 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:27.430 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.430 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.430 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.430 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.690 00:16:27.690 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.690 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.690 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.950 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.950 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.950 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.950 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.950 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.950 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.950 { 00:16:27.950 "cntlid": 7, 00:16:27.950 "qid": 0, 00:16:27.950 "state": "enabled", 00:16:27.950 "thread": "nvmf_tgt_poll_group_000", 00:16:27.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:27.950 "listen_address": { 00:16:27.950 "trtype": "TCP", 00:16:27.950 "adrfam": "IPv4", 00:16:27.950 "traddr": "10.0.0.2", 00:16:27.950 "trsvcid": "4420" 00:16:27.950 }, 00:16:27.950 "peer_address": { 00:16:27.950 "trtype": "TCP", 00:16:27.950 "adrfam": "IPv4", 00:16:27.950 "traddr": "10.0.0.1", 00:16:27.950 "trsvcid": "35310" 00:16:27.950 }, 00:16:27.950 "auth": { 00:16:27.950 "state": "completed", 00:16:27.950 "digest": "sha256", 00:16:27.950 "dhgroup": "null" 00:16:27.950 } 00:16:27.950 } 00:16:27.950 ]' 00:16:27.950 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.950 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.951 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.951 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:27.951 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.951 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.951 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.951 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.212 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:16:28.212 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.154 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.415 00:16:29.415 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.415 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.415 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.676 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.676 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.676 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.676 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.676 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.676 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.676 { 00:16:29.676 "cntlid": 9, 00:16:29.676 "qid": 0, 00:16:29.676 "state": "enabled", 00:16:29.676 "thread": "nvmf_tgt_poll_group_000", 00:16:29.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:29.676 "listen_address": { 00:16:29.676 "trtype": "TCP", 00:16:29.676 "adrfam": "IPv4", 00:16:29.676 "traddr": "10.0.0.2", 00:16:29.676 "trsvcid": "4420" 00:16:29.676 }, 00:16:29.676 "peer_address": { 00:16:29.676 "trtype": "TCP", 00:16:29.676 "adrfam": "IPv4", 00:16:29.676 "traddr": "10.0.0.1", 00:16:29.676 "trsvcid": "35352" 00:16:29.676 }, 00:16:29.676 "auth": { 00:16:29.676 "state": "completed", 00:16:29.676 "digest": "sha256", 00:16:29.676 "dhgroup": "ffdhe2048" 00:16:29.676 } 00:16:29.676 } 00:16:29.676 ]' 00:16:29.676 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.676 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.676 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.676 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:29.676 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.676 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.676 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.676 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.936 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:16:29.936 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.877 12:21:05 
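From here the outer dhgroup loop has advanced from null to ffdhe2048: the handshake now runs an actual finite-field Diffie-Hellman exchange (ffdhe2048 is the smallest of the RFC 7919 groups NVMe in-band authentication allows; ffdhe3072 through ffdhe8192 are the larger options), while the key material and digest are unchanged. Only the host-side option flips per round:

# same keys as before, but now require a DH exchange during authentication
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048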
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.877 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.138 00:16:31.139 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.139 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.139 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.139 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.139 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.139 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.139 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.139 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.139 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.139 { 00:16:31.139 "cntlid": 11, 00:16:31.139 "qid": 0, 00:16:31.139 "state": "enabled", 00:16:31.139 "thread": "nvmf_tgt_poll_group_000", 00:16:31.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:31.139 "listen_address": { 00:16:31.139 "trtype": "TCP", 00:16:31.139 "adrfam": "IPv4", 00:16:31.139 "traddr": "10.0.0.2", 00:16:31.139 "trsvcid": "4420" 00:16:31.139 }, 00:16:31.139 "peer_address": { 00:16:31.139 "trtype": "TCP", 00:16:31.139 "adrfam": "IPv4", 00:16:31.139 "traddr": "10.0.0.1", 00:16:31.139 "trsvcid": "35374" 00:16:31.139 }, 00:16:31.139 "auth": { 00:16:31.139 "state": "completed", 00:16:31.139 "digest": "sha256", 00:16:31.139 "dhgroup": "ffdhe2048" 00:16:31.139 } 00:16:31.139 } 00:16:31.139 ]' 00:16:31.139 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.399 12:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.399 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.399 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.399 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.399 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.399 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.399 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.660 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:16:31.660 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:16:32.231 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.231 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:32.231 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.231 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.231 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.231 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.231 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.231 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.491 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:32.491 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.491 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.491 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:32.491 12:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:32.491 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.491 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.491 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.491 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.491 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.491 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.491 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.491 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.751 00:16:32.751 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.751 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.751 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.011 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.011 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.011 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.011 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.011 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.011 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.011 { 00:16:33.011 "cntlid": 13, 00:16:33.011 "qid": 0, 00:16:33.011 "state": "enabled", 00:16:33.011 "thread": "nvmf_tgt_poll_group_000", 00:16:33.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:33.011 "listen_address": { 00:16:33.011 "trtype": "TCP", 00:16:33.011 "adrfam": "IPv4", 00:16:33.011 "traddr": "10.0.0.2", 00:16:33.011 "trsvcid": "4420" 00:16:33.011 }, 00:16:33.011 "peer_address": { 00:16:33.011 "trtype": "TCP", 00:16:33.011 "adrfam": "IPv4", 00:16:33.011 "traddr": "10.0.0.1", 00:16:33.011 "trsvcid": "35410" 00:16:33.011 }, 00:16:33.011 "auth": { 00:16:33.011 "state": "completed", 00:16:33.011 "digest": 
"sha256", 00:16:33.011 "dhgroup": "ffdhe2048" 00:16:33.011 } 00:16:33.011 } 00:16:33.011 ]' 00:16:33.011 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.011 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.011 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.011 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.011 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.011 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.011 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.011 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.271 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:16:33.271 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.212 12:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.212 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.473 00:16:34.473 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.473 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.473 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.734 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.734 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.734 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.734 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.734 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.734 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.734 { 00:16:34.734 "cntlid": 15, 00:16:34.734 "qid": 0, 00:16:34.734 "state": "enabled", 00:16:34.734 "thread": "nvmf_tgt_poll_group_000", 00:16:34.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:34.734 "listen_address": { 00:16:34.734 "trtype": "TCP", 00:16:34.734 "adrfam": "IPv4", 00:16:34.734 "traddr": "10.0.0.2", 00:16:34.734 "trsvcid": "4420" 00:16:34.734 }, 00:16:34.734 "peer_address": { 00:16:34.734 "trtype": "TCP", 00:16:34.734 "adrfam": "IPv4", 00:16:34.734 "traddr": "10.0.0.1", 00:16:34.734 
"trsvcid": "38500" 00:16:34.734 }, 00:16:34.734 "auth": { 00:16:34.734 "state": "completed", 00:16:34.734 "digest": "sha256", 00:16:34.734 "dhgroup": "ffdhe2048" 00:16:34.734 } 00:16:34.734 } 00:16:34.734 ]' 00:16:34.734 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.734 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.734 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.734 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.734 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.734 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.734 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.734 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.995 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:16:34.995 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:16:35.578 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.578 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:35.578 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.578 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.578 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.578 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.578 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.578 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.840 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.840 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:35.840 12:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.840 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.840 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.840 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:35.840 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.840 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.840 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.840 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.840 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.840 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.840 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.840 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.100 00:16:36.100 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.100 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.101 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.362 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.362 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.362 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.362 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.362 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.362 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.362 { 00:16:36.362 "cntlid": 17, 00:16:36.362 "qid": 0, 00:16:36.362 "state": "enabled", 00:16:36.362 "thread": "nvmf_tgt_poll_group_000", 00:16:36.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:36.362 "listen_address": { 00:16:36.362 "trtype": "TCP", 00:16:36.362 "adrfam": "IPv4", 
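Once a qpairs listing like the one being printed here completes, the test reads it back field by field. A condensed sketch of those checks, assuming the same rpc.py client and subsystem NQN that appear throughout this trace (this pass negotiates sha256 with ffdhe3072):

    # fetch the active qpairs for the subsystem and verify the negotiated auth parameters
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]     # hash used for DH-HMAC-CHAP
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]  # FFDHE group for this pass
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # authentication finished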
00:16:36.362 "traddr": "10.0.0.2", 00:16:36.362 "trsvcid": "4420" 00:16:36.362 }, 00:16:36.362 "peer_address": { 00:16:36.362 "trtype": "TCP", 00:16:36.362 "adrfam": "IPv4", 00:16:36.362 "traddr": "10.0.0.1", 00:16:36.362 "trsvcid": "38528" 00:16:36.362 }, 00:16:36.362 "auth": { 00:16:36.362 "state": "completed", 00:16:36.362 "digest": "sha256", 00:16:36.362 "dhgroup": "ffdhe3072" 00:16:36.362 } 00:16:36.362 } 00:16:36.362 ]' 00:16:36.362 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.362 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.362 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.362 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.362 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.362 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.362 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.362 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.623 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:16:36.623 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:16:37.565 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.565 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:37.565 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.565 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.565 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.565 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.565 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.565 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.565 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:37.565 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.565 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.565 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.565 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:37.565 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.565 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.565 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.565 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.565 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.565 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.565 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.565 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.825 00:16:37.825 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.825 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.825 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.086 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.086 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.086 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.086 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.086 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.086 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.086 { 
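Each digest/dhgroup/key pass that produces a listing like the one opening here runs the same fixed sequence of target- and host-side calls. A minimal sketch of one pass, using the trace's own rpc_cmd/hostrpc wrappers; "$hostnqn" stands in for the full uuid NQN shown in the log, and key1/ckey1 are this iteration's keys:

    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1        # register the host and its keys on the target
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1        # authenticate in-band from the SPDK host
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0   # dump qpairs (the JSON shown here)
    hostrpc bdev_nvme_detach_controller nvme0             # tear the session down
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The kernel-initiator leg then repeats the same handshake with nvme-cli (nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ..., followed by nvme disconnect), as the surrounding trace shows.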
00:16:38.086 "cntlid": 19, 00:16:38.086 "qid": 0, 00:16:38.086 "state": "enabled", 00:16:38.086 "thread": "nvmf_tgt_poll_group_000", 00:16:38.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:38.086 "listen_address": { 00:16:38.086 "trtype": "TCP", 00:16:38.086 "adrfam": "IPv4", 00:16:38.086 "traddr": "10.0.0.2", 00:16:38.086 "trsvcid": "4420" 00:16:38.086 }, 00:16:38.086 "peer_address": { 00:16:38.086 "trtype": "TCP", 00:16:38.086 "adrfam": "IPv4", 00:16:38.086 "traddr": "10.0.0.1", 00:16:38.086 "trsvcid": "38554" 00:16:38.086 }, 00:16:38.086 "auth": { 00:16:38.086 "state": "completed", 00:16:38.086 "digest": "sha256", 00:16:38.086 "dhgroup": "ffdhe3072" 00:16:38.086 } 00:16:38.086 } 00:16:38.086 ]' 00:16:38.086 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.086 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.086 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.086 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.086 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.086 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.086 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.086 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.347 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:16:38.347 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.288 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.549 00:16:39.549 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.549 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.549 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.809 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.809 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.809 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.809 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.809 12:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.809 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.809 { 00:16:39.809 "cntlid": 21, 00:16:39.809 "qid": 0, 00:16:39.809 "state": "enabled", 00:16:39.809 "thread": "nvmf_tgt_poll_group_000", 00:16:39.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:39.809 "listen_address": { 00:16:39.809 "trtype": "TCP", 00:16:39.809 "adrfam": "IPv4", 00:16:39.809 "traddr": "10.0.0.2", 00:16:39.809 "trsvcid": "4420" 00:16:39.809 }, 00:16:39.809 "peer_address": { 00:16:39.809 "trtype": "TCP", 00:16:39.809 "adrfam": "IPv4", 00:16:39.809 "traddr": "10.0.0.1", 00:16:39.809 "trsvcid": "38594" 00:16:39.809 }, 00:16:39.809 "auth": { 00:16:39.809 "state": "completed", 00:16:39.809 "digest": "sha256", 00:16:39.809 "dhgroup": "ffdhe3072" 00:16:39.809 } 00:16:39.809 } 00:16:39.809 ]' 00:16:39.809 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.809 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.809 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.809 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.809 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.809 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.809 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.809 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.069 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:16:40.069 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:16:41.010 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.010 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:41.010 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.010 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.010 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:41.010 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.010 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:41.010 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:41.010 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:41.010 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.010 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.010 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:41.010 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:41.010 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.010 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:41.011 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.011 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.011 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.011 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:41.011 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.011 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.271 00:16:41.271 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.271 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.271 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.532 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.532 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.532 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.532 12:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.532 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.532 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.532 { 00:16:41.532 "cntlid": 23, 00:16:41.532 "qid": 0, 00:16:41.532 "state": "enabled", 00:16:41.532 "thread": "nvmf_tgt_poll_group_000", 00:16:41.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:41.532 "listen_address": { 00:16:41.532 "trtype": "TCP", 00:16:41.532 "adrfam": "IPv4", 00:16:41.532 "traddr": "10.0.0.2", 00:16:41.532 "trsvcid": "4420" 00:16:41.532 }, 00:16:41.532 "peer_address": { 00:16:41.532 "trtype": "TCP", 00:16:41.532 "adrfam": "IPv4", 00:16:41.532 "traddr": "10.0.0.1", 00:16:41.532 "trsvcid": "38614" 00:16:41.532 }, 00:16:41.532 "auth": { 00:16:41.532 "state": "completed", 00:16:41.532 "digest": "sha256", 00:16:41.532 "dhgroup": "ffdhe3072" 00:16:41.532 } 00:16:41.532 } 00:16:41.532 ]' 00:16:41.532 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.532 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.532 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.532 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:41.532 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.793 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:16:41.793 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:16:42.733 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.733 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:42.733 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.733 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.733 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:42.733 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.733 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.733 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.733 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:42.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:42.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:42.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.733 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.994 00:16:42.994 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.994 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.994 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.255 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.255 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.255 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.255 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.255 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.255 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.255 { 00:16:43.255 "cntlid": 25, 00:16:43.255 "qid": 0, 00:16:43.255 "state": "enabled", 00:16:43.255 "thread": "nvmf_tgt_poll_group_000", 00:16:43.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:43.255 "listen_address": { 00:16:43.255 "trtype": "TCP", 00:16:43.255 "adrfam": "IPv4", 00:16:43.255 "traddr": "10.0.0.2", 00:16:43.255 "trsvcid": "4420" 00:16:43.255 }, 00:16:43.255 "peer_address": { 00:16:43.255 "trtype": "TCP", 00:16:43.255 "adrfam": "IPv4", 00:16:43.255 "traddr": "10.0.0.1", 00:16:43.255 "trsvcid": "38654" 00:16:43.255 }, 00:16:43.255 "auth": { 00:16:43.255 "state": "completed", 00:16:43.255 "digest": "sha256", 00:16:43.255 "dhgroup": "ffdhe4096" 00:16:43.255 } 00:16:43.255 } 00:16:43.255 ]' 00:16:43.255 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.255 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.255 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.255 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.255 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.255 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.255 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.255 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.515 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:16:43.515 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.457 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.718 00:16:44.718 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.718 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.718 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.980 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.980 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.980 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.980 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.980 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.980 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.980 { 00:16:44.980 "cntlid": 27, 00:16:44.980 "qid": 0, 00:16:44.980 "state": "enabled", 00:16:44.980 "thread": "nvmf_tgt_poll_group_000", 00:16:44.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:44.980 "listen_address": { 00:16:44.980 "trtype": "TCP", 00:16:44.980 "adrfam": "IPv4", 00:16:44.980 "traddr": "10.0.0.2", 00:16:44.980 "trsvcid": "4420" 00:16:44.980 }, 00:16:44.980 "peer_address": { 00:16:44.980 "trtype": "TCP", 00:16:44.980 "adrfam": "IPv4", 00:16:44.980 "traddr": "10.0.0.1", 00:16:44.980 "trsvcid": "48310" 00:16:44.980 }, 00:16:44.980 "auth": { 00:16:44.980 "state": "completed", 00:16:44.980 "digest": "sha256", 00:16:44.980 "dhgroup": "ffdhe4096" 00:16:44.980 } 00:16:44.980 } 00:16:44.980 ]' 00:16:44.980 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.980 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.980 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.980 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.980 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.980 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.980 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.980 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.241 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:16:45.241 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:16:45.813 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:46.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.073 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.334 00:16:46.334 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
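Throughout this trace, every hostrpc command (tagged target/auth.sh@31) is immediately followed by its expansion against the host application's RPC socket rather than the target's. The helper behaves like this sketch; the $rootdir spelling is an assumption, since the trace shows the absolute workspace path:

    # forward an RPC verb and its arguments to the SPDK RPC client,
    # pointed at the host-side socket instead of the default target socket
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }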
00:16:46.334 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.334 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.594 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.595 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.595 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.595 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.595 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.595 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.595 { 00:16:46.595 "cntlid": 29, 00:16:46.595 "qid": 0, 00:16:46.595 "state": "enabled", 00:16:46.595 "thread": "nvmf_tgt_poll_group_000", 00:16:46.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:46.595 "listen_address": { 00:16:46.595 "trtype": "TCP", 00:16:46.595 "adrfam": "IPv4", 00:16:46.595 "traddr": "10.0.0.2", 00:16:46.595 "trsvcid": "4420" 00:16:46.595 }, 00:16:46.595 "peer_address": { 00:16:46.595 "trtype": "TCP", 00:16:46.595 "adrfam": "IPv4", 00:16:46.595 "traddr": "10.0.0.1", 00:16:46.595 "trsvcid": "48324" 00:16:46.595 }, 00:16:46.595 "auth": { 00:16:46.595 "state": "completed", 00:16:46.595 "digest": "sha256", 00:16:46.595 "dhgroup": "ffdhe4096" 00:16:46.595 } 00:16:46.595 } 00:16:46.595 ]' 00:16:46.595 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.595 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.595 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.595 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.595 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.855 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.855 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.855 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.855 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:16:46.855 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: 
--dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.796 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.797 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:47.797 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.797 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.057 00:16:48.057 12:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.057 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.058 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.318 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.318 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.318 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.318 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.318 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.318 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.318 { 00:16:48.318 "cntlid": 31, 00:16:48.318 "qid": 0, 00:16:48.318 "state": "enabled", 00:16:48.318 "thread": "nvmf_tgt_poll_group_000", 00:16:48.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:48.318 "listen_address": { 00:16:48.318 "trtype": "TCP", 00:16:48.318 "adrfam": "IPv4", 00:16:48.318 "traddr": "10.0.0.2", 00:16:48.318 "trsvcid": "4420" 00:16:48.318 }, 00:16:48.318 "peer_address": { 00:16:48.318 "trtype": "TCP", 00:16:48.318 "adrfam": "IPv4", 00:16:48.318 "traddr": "10.0.0.1", 00:16:48.318 "trsvcid": "48364" 00:16:48.318 }, 00:16:48.318 "auth": { 00:16:48.318 "state": "completed", 00:16:48.318 "digest": "sha256", 00:16:48.318 "dhgroup": "ffdhe4096" 00:16:48.318 } 00:16:48.318 } 00:16:48.318 ]' 00:16:48.318 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.318 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.318 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.318 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:48.318 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.318 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.318 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.318 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.579 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:16:48.579 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:16:49.520 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.520 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:49.520 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.520 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.520 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.520 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.520 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.520 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.520 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.520 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:49.520 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.520 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.520 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:49.520 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:49.520 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.520 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.520 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.520 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.520 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.520 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.520 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.520 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.091 00:16:50.091 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.091 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.091 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.091 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.091 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.091 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.091 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.091 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.091 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.091 { 00:16:50.091 "cntlid": 33, 00:16:50.091 "qid": 0, 00:16:50.091 "state": "enabled", 00:16:50.091 "thread": "nvmf_tgt_poll_group_000", 00:16:50.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:50.091 "listen_address": { 00:16:50.091 "trtype": "TCP", 00:16:50.091 "adrfam": "IPv4", 00:16:50.091 "traddr": "10.0.0.2", 00:16:50.091 "trsvcid": "4420" 00:16:50.091 }, 00:16:50.091 "peer_address": { 00:16:50.091 "trtype": "TCP", 00:16:50.091 "adrfam": "IPv4", 00:16:50.091 "traddr": "10.0.0.1", 00:16:50.091 "trsvcid": "48398" 00:16:50.091 }, 00:16:50.091 "auth": { 00:16:50.091 "state": "completed", 00:16:50.091 "digest": "sha256", 00:16:50.091 "dhgroup": "ffdhe6144" 00:16:50.091 } 00:16:50.091 } 00:16:50.091 ]' 00:16:50.091 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.091 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.091 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.091 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:50.091 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.352 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.352 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.352 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.352 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret 
DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:16:50.352 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:16:51.293 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.293 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:51.293 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.293 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.293 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.293 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.293 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:51.293 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:51.293 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:51.293 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.293 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:51.293 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:51.293 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:51.293 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.293 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.294 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.294 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.294 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.294 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.294 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.294 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.865 00:16:51.865 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.865 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.865 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.865 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.865 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.865 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.865 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.865 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.865 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.865 { 00:16:51.865 "cntlid": 35, 00:16:51.865 "qid": 0, 00:16:51.865 "state": "enabled", 00:16:51.865 "thread": "nvmf_tgt_poll_group_000", 00:16:51.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:51.865 "listen_address": { 00:16:51.865 "trtype": "TCP", 00:16:51.865 "adrfam": "IPv4", 00:16:51.865 "traddr": "10.0.0.2", 00:16:51.865 "trsvcid": "4420" 00:16:51.865 }, 00:16:51.865 "peer_address": { 00:16:51.865 "trtype": "TCP", 00:16:51.865 "adrfam": "IPv4", 00:16:51.865 "traddr": "10.0.0.1", 00:16:51.865 "trsvcid": "48418" 00:16:51.865 }, 00:16:51.865 "auth": { 00:16:51.865 "state": "completed", 00:16:51.866 "digest": "sha256", 00:16:51.866 "dhgroup": "ffdhe6144" 00:16:51.866 } 00:16:51.866 } 00:16:51.866 ]' 00:16:51.866 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.866 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.126 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.126 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.126 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.126 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.126 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.126 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.126 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:16:52.126 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:16:53.067 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.067 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:53.067 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.067 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.067 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.067 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.067 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.067 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.327 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:53.328 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.328 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.328 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:53.328 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:53.328 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.328 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.328 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.328 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.328 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.328 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.328 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.328 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.588 00:16:53.588 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.588 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.588 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.851 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.851 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.851 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.851 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.851 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.851 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.851 { 00:16:53.851 "cntlid": 37, 00:16:53.851 "qid": 0, 00:16:53.851 "state": "enabled", 00:16:53.851 "thread": "nvmf_tgt_poll_group_000", 00:16:53.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:53.851 "listen_address": { 00:16:53.851 "trtype": "TCP", 00:16:53.851 "adrfam": "IPv4", 00:16:53.851 "traddr": "10.0.0.2", 00:16:53.851 "trsvcid": "4420" 00:16:53.851 }, 00:16:53.851 "peer_address": { 00:16:53.851 "trtype": "TCP", 00:16:53.851 "adrfam": "IPv4", 00:16:53.851 "traddr": "10.0.0.1", 00:16:53.851 "trsvcid": "48444" 00:16:53.851 }, 00:16:53.851 "auth": { 00:16:53.851 "state": "completed", 00:16:53.851 "digest": "sha256", 00:16:53.851 "dhgroup": "ffdhe6144" 00:16:53.851 } 00:16:53.851 } 00:16:53.851 ]' 00:16:53.851 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.851 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.851 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.851 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.851 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.851 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.851 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:53.851 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.112 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:16:54.112 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.053 12:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.053 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.313 00:16:55.313 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.313 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.313 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.574 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.574 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.574 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.574 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.574 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.574 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.574 { 00:16:55.574 "cntlid": 39, 00:16:55.574 "qid": 0, 00:16:55.574 "state": "enabled", 00:16:55.574 "thread": "nvmf_tgt_poll_group_000", 00:16:55.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:55.574 "listen_address": { 00:16:55.574 "trtype": "TCP", 00:16:55.574 "adrfam": "IPv4", 00:16:55.574 "traddr": "10.0.0.2", 00:16:55.574 "trsvcid": "4420" 00:16:55.574 }, 00:16:55.574 "peer_address": { 00:16:55.574 "trtype": "TCP", 00:16:55.574 "adrfam": "IPv4", 00:16:55.574 "traddr": "10.0.0.1", 00:16:55.574 "trsvcid": "55892" 00:16:55.574 }, 00:16:55.574 "auth": { 00:16:55.574 "state": "completed", 00:16:55.574 "digest": "sha256", 00:16:55.574 "dhgroup": "ffdhe6144" 00:16:55.574 } 00:16:55.574 } 00:16:55.574 ]' 00:16:55.574 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.574 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.574 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.574 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.574 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.834 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:55.834 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.835 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.835 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:16:55.835 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
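[Annotation, not part of the captured trace] Each key pair is exercised twice per iteration: once through the SPDK host stack (the bdev_nvme_attach_controller call) and once through the Linux kernel initiator, which is the nvme connect / nvme disconnect exchange recurring in the trace. The DHHC-1:NN:...: strings are the literal secret blobs passed on the command line. A sketch of that second leg, with $hostnqn, $hostid, $key and $ckey as placeholders for the values visible in the trace:
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # Drop the host entry so the next digest/dhgroup/key combination starts clean.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"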
00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.849 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.850 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.450 00:16:57.450 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.450 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.450 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.451 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.451 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.451 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.451 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.451 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.451 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.451 { 00:16:57.451 "cntlid": 41, 00:16:57.451 "qid": 0, 00:16:57.451 "state": "enabled", 00:16:57.451 "thread": "nvmf_tgt_poll_group_000", 00:16:57.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:57.451 "listen_address": { 00:16:57.451 "trtype": "TCP", 00:16:57.451 "adrfam": "IPv4", 00:16:57.451 "traddr": "10.0.0.2", 00:16:57.451 "trsvcid": "4420" 00:16:57.451 }, 00:16:57.451 "peer_address": { 00:16:57.451 "trtype": "TCP", 00:16:57.451 "adrfam": "IPv4", 00:16:57.451 "traddr": "10.0.0.1", 00:16:57.451 "trsvcid": "55914" 00:16:57.451 }, 00:16:57.451 "auth": { 00:16:57.451 "state": "completed", 00:16:57.451 "digest": "sha256", 00:16:57.451 "dhgroup": "ffdhe8192" 00:16:57.451 } 00:16:57.451 } 00:16:57.451 ]' 00:16:57.451 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.711 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.711 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.711 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.711 12:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.711 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.711 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.711 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.972 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:16:57.972 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:16:58.543 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.543 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:58.543 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.543 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.543 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.543 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.543 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.543 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.804 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:58.804 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.804 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.804 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.804 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:58.804 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.804 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.804 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.804 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.804 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.804 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.804 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.804 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.375 00:16:59.375 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.375 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.375 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.636 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.636 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.636 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.636 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.636 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.636 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.636 { 00:16:59.636 "cntlid": 43, 00:16:59.636 "qid": 0, 00:16:59.636 "state": "enabled", 00:16:59.636 "thread": "nvmf_tgt_poll_group_000", 00:16:59.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:59.636 "listen_address": { 00:16:59.636 "trtype": "TCP", 00:16:59.636 "adrfam": "IPv4", 00:16:59.636 "traddr": "10.0.0.2", 00:16:59.636 "trsvcid": "4420" 00:16:59.636 }, 00:16:59.636 "peer_address": { 00:16:59.636 "trtype": "TCP", 00:16:59.636 "adrfam": "IPv4", 00:16:59.636 "traddr": "10.0.0.1", 00:16:59.636 "trsvcid": "55928" 00:16:59.636 }, 00:16:59.636 "auth": { 00:16:59.636 "state": "completed", 00:16:59.636 "digest": "sha256", 00:16:59.636 "dhgroup": "ffdhe8192" 00:16:59.636 } 00:16:59.636 } 00:16:59.636 ]' 00:16:59.636 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.636 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:59.636 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.636 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.636 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.636 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.636 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.636 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.897 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:16:59.897 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:00.838 12:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.838 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.409 00:17:01.409 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.409 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.409 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.409 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.409 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.409 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.409 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.409 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.409 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.409 { 00:17:01.409 "cntlid": 45, 00:17:01.409 "qid": 0, 00:17:01.409 "state": "enabled", 00:17:01.409 "thread": "nvmf_tgt_poll_group_000", 00:17:01.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:01.409 "listen_address": { 00:17:01.409 "trtype": "TCP", 00:17:01.409 "adrfam": "IPv4", 00:17:01.409 "traddr": "10.0.0.2", 00:17:01.409 "trsvcid": "4420" 00:17:01.409 }, 00:17:01.409 "peer_address": { 00:17:01.409 "trtype": "TCP", 00:17:01.409 "adrfam": "IPv4", 00:17:01.409 "traddr": "10.0.0.1", 00:17:01.409 "trsvcid": "55954" 00:17:01.409 }, 00:17:01.409 "auth": { 00:17:01.409 "state": "completed", 00:17:01.409 "digest": "sha256", 00:17:01.409 "dhgroup": "ffdhe8192" 00:17:01.409 } 00:17:01.409 } 00:17:01.409 ]' 00:17:01.409 
12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.669 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.669 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.669 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.669 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.669 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.669 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.669 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.929 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:17:01.929 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:17:02.500 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.500 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:02.500 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.500 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.500 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.500 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.500 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:02.500 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:02.761 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:02.761 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.761 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:02.761 12:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:02.761 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:02.761 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.761 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:02.761 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.761 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.761 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.761 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:02.761 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.761 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.331 00:17:03.331 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.331 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.331 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.591 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.591 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.591 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.591 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.592 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.592 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.592 { 00:17:03.592 "cntlid": 47, 00:17:03.592 "qid": 0, 00:17:03.592 "state": "enabled", 00:17:03.592 "thread": "nvmf_tgt_poll_group_000", 00:17:03.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:03.592 "listen_address": { 00:17:03.592 "trtype": "TCP", 00:17:03.592 "adrfam": "IPv4", 00:17:03.592 "traddr": "10.0.0.2", 00:17:03.592 "trsvcid": "4420" 00:17:03.592 }, 00:17:03.592 "peer_address": { 00:17:03.592 "trtype": "TCP", 00:17:03.592 "adrfam": "IPv4", 00:17:03.592 "traddr": "10.0.0.1", 00:17:03.592 "trsvcid": "55974" 00:17:03.592 }, 00:17:03.592 "auth": { 00:17:03.592 "state": "completed", 00:17:03.592 
"digest": "sha256", 00:17:03.592 "dhgroup": "ffdhe8192" 00:17:03.592 } 00:17:03.592 } 00:17:03.592 ]' 00:17:03.592 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.592 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.592 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.592 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.592 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.592 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.592 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.592 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.852 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:17:03.852 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:17:04.422 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:04.683 12:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.683 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.943 00:17:04.943 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.943 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.943 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.204 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.204 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.204 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.204 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.204 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.204 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.204 { 00:17:05.204 "cntlid": 49, 00:17:05.204 "qid": 0, 00:17:05.204 "state": "enabled", 00:17:05.204 "thread": "nvmf_tgt_poll_group_000", 00:17:05.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:05.204 "listen_address": { 00:17:05.204 "trtype": "TCP", 00:17:05.204 "adrfam": "IPv4", 
00:17:05.204 "traddr": "10.0.0.2", 00:17:05.204 "trsvcid": "4420" 00:17:05.204 }, 00:17:05.204 "peer_address": { 00:17:05.204 "trtype": "TCP", 00:17:05.204 "adrfam": "IPv4", 00:17:05.204 "traddr": "10.0.0.1", 00:17:05.204 "trsvcid": "49824" 00:17:05.204 }, 00:17:05.204 "auth": { 00:17:05.204 "state": "completed", 00:17:05.204 "digest": "sha384", 00:17:05.204 "dhgroup": "null" 00:17:05.204 } 00:17:05.204 } 00:17:05.204 ]' 00:17:05.204 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.204 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.204 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.204 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:05.204 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.204 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.204 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.204 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.463 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:17:05.463 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.404 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.665 00:17:06.665 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.665 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.665 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.925 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.925 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.925 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.925 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.925 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.925 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.925 { 00:17:06.925 "cntlid": 51, 00:17:06.925 "qid": 0, 00:17:06.925 "state": "enabled", 
00:17:06.925 "thread": "nvmf_tgt_poll_group_000", 00:17:06.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:06.925 "listen_address": { 00:17:06.925 "trtype": "TCP", 00:17:06.925 "adrfam": "IPv4", 00:17:06.925 "traddr": "10.0.0.2", 00:17:06.925 "trsvcid": "4420" 00:17:06.925 }, 00:17:06.925 "peer_address": { 00:17:06.925 "trtype": "TCP", 00:17:06.925 "adrfam": "IPv4", 00:17:06.925 "traddr": "10.0.0.1", 00:17:06.925 "trsvcid": "49862" 00:17:06.925 }, 00:17:06.925 "auth": { 00:17:06.925 "state": "completed", 00:17:06.925 "digest": "sha384", 00:17:06.925 "dhgroup": "null" 00:17:06.925 } 00:17:06.925 } 00:17:06.925 ]' 00:17:06.925 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.925 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.925 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.925 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:06.926 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.926 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.926 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.926 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.185 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:17:07.185 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.125 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.386 00:17:08.386 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.386 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.386 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.646 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.646 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.646 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.646 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.646 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.646 12:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.646 { 00:17:08.646 "cntlid": 53, 00:17:08.646 "qid": 0, 00:17:08.646 "state": "enabled", 00:17:08.646 "thread": "nvmf_tgt_poll_group_000", 00:17:08.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:08.646 "listen_address": { 00:17:08.646 "trtype": "TCP", 00:17:08.646 "adrfam": "IPv4", 00:17:08.646 "traddr": "10.0.0.2", 00:17:08.646 "trsvcid": "4420" 00:17:08.646 }, 00:17:08.646 "peer_address": { 00:17:08.646 "trtype": "TCP", 00:17:08.646 "adrfam": "IPv4", 00:17:08.646 "traddr": "10.0.0.1", 00:17:08.646 "trsvcid": "49892" 00:17:08.646 }, 00:17:08.646 "auth": { 00:17:08.646 "state": "completed", 00:17:08.646 "digest": "sha384", 00:17:08.646 "dhgroup": "null" 00:17:08.646 } 00:17:08.646 } 00:17:08.646 ]' 00:17:08.646 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.647 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.647 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.647 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:08.647 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.647 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.647 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.647 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.908 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:17:08.908 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.848 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.109 00:17:10.109 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.109 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.109 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.370 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.370 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.370 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.370 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.370 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.370 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.370 { 00:17:10.370 "cntlid": 55, 00:17:10.370 "qid": 0, 00:17:10.370 "state": "enabled", 00:17:10.370 "thread": "nvmf_tgt_poll_group_000", 00:17:10.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:10.370 "listen_address": { 00:17:10.370 "trtype": "TCP", 00:17:10.370 "adrfam": "IPv4", 00:17:10.370 "traddr": "10.0.0.2", 00:17:10.370 "trsvcid": "4420" 00:17:10.370 }, 00:17:10.370 "peer_address": { 00:17:10.370 "trtype": "TCP", 00:17:10.370 "adrfam": "IPv4", 00:17:10.370 "traddr": "10.0.0.1", 00:17:10.370 "trsvcid": "49916" 00:17:10.370 }, 00:17:10.370 "auth": { 00:17:10.370 "state": "completed", 00:17:10.370 "digest": "sha384", 00:17:10.370 "dhgroup": "null" 00:17:10.370 } 00:17:10.370 } 00:17:10.370 ]' 00:17:10.370 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.370 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.370 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.370 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:10.370 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.370 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.370 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.370 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.631 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:17:10.631 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:17:11.574 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.574 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.574 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.574 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.574 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.574 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.574 12:21:45 
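The @118, @119, and @120 frames above expose the loop that drives this whole section: digests crossed with dhgroups crossed with key ids. Its shape, reconstructed as a skeleton sketch; the array contents are inferred only from the combinations this trace exercises, and the hostrpc/connect_authenticate helpers and keys array are defined earlier in auth.sh, outside this excerpt:

    # Reconstructed control flow of target/auth.sh around lines 118-123.
    digests=(sha256 sha384)        # sha256 finished above; list assumed partial
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do          # keys 0..3 in this run
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done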
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.574 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.574 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.574 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:11.574 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.574 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.574 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:11.574 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:11.574 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.574 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.574 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.574 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.574 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.574 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.574 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.574 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.835 00:17:11.835 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.835 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.835 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.095 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.095 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.095 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:12.095 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.095 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.095 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.095 { 00:17:12.095 "cntlid": 57, 00:17:12.095 "qid": 0, 00:17:12.095 "state": "enabled", 00:17:12.095 "thread": "nvmf_tgt_poll_group_000", 00:17:12.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:12.095 "listen_address": { 00:17:12.095 "trtype": "TCP", 00:17:12.095 "adrfam": "IPv4", 00:17:12.095 "traddr": "10.0.0.2", 00:17:12.095 "trsvcid": "4420" 00:17:12.095 }, 00:17:12.095 "peer_address": { 00:17:12.095 "trtype": "TCP", 00:17:12.095 "adrfam": "IPv4", 00:17:12.095 "traddr": "10.0.0.1", 00:17:12.095 "trsvcid": "49958" 00:17:12.095 }, 00:17:12.095 "auth": { 00:17:12.095 "state": "completed", 00:17:12.095 "digest": "sha384", 00:17:12.095 "dhgroup": "ffdhe2048" 00:17:12.095 } 00:17:12.095 } 00:17:12.095 ]' 00:17:12.095 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.095 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.095 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.095 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:12.095 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.095 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.095 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.095 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.356 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:17:12.356 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:17:13.298 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.299 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.559 00:17:13.559 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.559 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.559 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.820 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.820 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.820 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.820 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.820 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.820 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.820 { 00:17:13.820 "cntlid": 59, 00:17:13.820 "qid": 0, 00:17:13.820 "state": "enabled", 00:17:13.820 "thread": "nvmf_tgt_poll_group_000", 00:17:13.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:13.820 "listen_address": { 00:17:13.820 "trtype": "TCP", 00:17:13.820 "adrfam": "IPv4", 00:17:13.820 "traddr": "10.0.0.2", 00:17:13.820 "trsvcid": "4420" 00:17:13.820 }, 00:17:13.820 "peer_address": { 00:17:13.820 "trtype": "TCP", 00:17:13.820 "adrfam": "IPv4", 00:17:13.820 "traddr": "10.0.0.1", 00:17:13.820 "trsvcid": "49992" 00:17:13.820 }, 00:17:13.820 "auth": { 00:17:13.820 "state": "completed", 00:17:13.820 "digest": "sha384", 00:17:13.820 "dhgroup": "ffdhe2048" 00:17:13.820 } 00:17:13.820 } 00:17:13.820 ]' 00:17:13.820 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.820 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.820 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.820 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:13.820 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.820 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.820 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.820 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.082 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:17:14.082 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:17:14.653 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.914 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.176 00:17:15.176 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.176 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.176 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:15.436 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:15.436 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:15.436 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:15.436 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:15.436 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:15.436 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:15.436 {
00:17:15.436 "cntlid": 61,
00:17:15.436 "qid": 0,
00:17:15.436 "state": "enabled",
00:17:15.436 "thread": "nvmf_tgt_poll_group_000",
00:17:15.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:15.436 "listen_address": {
00:17:15.436 "trtype": "TCP",
00:17:15.436 "adrfam": "IPv4",
00:17:15.436 "traddr": "10.0.0.2",
00:17:15.436 "trsvcid": "4420"
00:17:15.436 },
00:17:15.436 "peer_address": {
00:17:15.436 "trtype": "TCP",
00:17:15.436 "adrfam": "IPv4",
00:17:15.436 "traddr": "10.0.0.1",
00:17:15.436 "trsvcid": "53482"
00:17:15.436 },
00:17:15.436 "auth": {
00:17:15.436 "state": "completed",
00:17:15.436 "digest": "sha384",
00:17:15.436 "dhgroup": "ffdhe2048"
00:17:15.436 }
00:17:15.436 }
00:17:15.436 ]'
00:17:15.436 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:15.436 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:15.436 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:15.436 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:15.436 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:15.696 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:15.696 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:15.696 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:15.696 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/:
00:17:15.696 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/:
00:17:16.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:16.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:16.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:16.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:16.640 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:16.640 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:16.640 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:16.640 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:17:16.640 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:17:16.640 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:17:16.640 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:16.640 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:16.640 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:16.640 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:16.640 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:16.640 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:17:16.640 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:16.640 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:16.640 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:16.640 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:16.640 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:16.640 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:16.900
00:17:16.900 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:16.900 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:16.900 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:17.162 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:17.162 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:17.162 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.162 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:17.162 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:17.162 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:17.162 {
00:17:17.162 "cntlid": 63,
00:17:17.162 "qid": 0,
00:17:17.162 "state": "enabled",
00:17:17.162 "thread": "nvmf_tgt_poll_group_000",
00:17:17.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:17.162 "listen_address": {
00:17:17.162 "trtype": "TCP",
00:17:17.162 "adrfam": "IPv4",
00:17:17.162 "traddr": "10.0.0.2",
00:17:17.162 "trsvcid": "4420"
00:17:17.162 },
00:17:17.162 "peer_address": {
00:17:17.162 "trtype": "TCP",
00:17:17.162 "adrfam": "IPv4",
00:17:17.162 "traddr": "10.0.0.1",
00:17:17.162 "trsvcid": "53502"
00:17:17.162 },
00:17:17.162 "auth": {
00:17:17.162 "state": "completed",
00:17:17.162 "digest": "sha384",
00:17:17.162 "dhgroup": "ffdhe2048"
00:17:17.162 }
00:17:17.162 }
00:17:17.162 ]'
00:17:17.162 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:17.162 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:17.162 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:17.162 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:17.162 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:17.162 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:17.162 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:17.162 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:17.423 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=:
00:17:17.423 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=:
00:17:18.366 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:18.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:18.366 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:18.366 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:18.366 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.366 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:18.366 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:18.366 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:18.366 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:18.367 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:18.367 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:17:18.367 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:18.367 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:18.367 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:18.367 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:18.367 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:18.367 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:18.367 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:18.367 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.367 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:18.367 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:18.367 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:18.367 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:18.627
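Each pass of the loop traced above (target/auth.sh@119-123) repeats one RPC sequence per digest/dhgroup/key combination: restrict the host's DH-HMAC-CHAP parameters, register the host NQN on the subsystem with the key under test, then attach a controller so authentication runs during CONNECT. A minimal sketch of one iteration, assuming the host RPC socket at /var/tmp/host.sock, the target at 10.0.0.2:4420, and that the named keys (key0/ckey0) were loaded into the keyring earlier in the test; rpc.py stands for spdk/scripts/rpc.py and HOSTNQN is an illustrative variable, not part of the original script:

# Host NQN used throughout this trace.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
# Host side: allow only this digest/dhgroup for DH-HMAC-CHAP negotiation.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
# Target side (default RPC socket): admit the host with key0; ckey0 enables bidirectional auth.
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Host side: attach a controller; the DH-HMAC-CHAP handshake runs as part of the fabric CONNECT.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0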
00:17:18.627 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:18.627 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:18.628 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:18.888 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:18.888 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:18.888 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:18.888 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.888 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:18.888 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:18.888 {
00:17:18.888 "cntlid": 65,
00:17:18.888 "qid": 0,
00:17:18.888 "state": "enabled",
00:17:18.888 "thread": "nvmf_tgt_poll_group_000",
00:17:18.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:18.888 "listen_address": {
00:17:18.888 "trtype": "TCP",
00:17:18.888 "adrfam": "IPv4",
00:17:18.888 "traddr": "10.0.0.2",
00:17:18.888 "trsvcid": "4420"
00:17:18.888 },
00:17:18.888 "peer_address": {
00:17:18.888 "trtype": "TCP",
00:17:18.888 "adrfam": "IPv4",
00:17:18.888 "traddr": "10.0.0.1",
00:17:18.888 "trsvcid": "53526"
00:17:18.889 },
00:17:18.889 "auth": {
00:17:18.889 "state": "completed",
00:17:18.889 "digest": "sha384",
00:17:18.889 "dhgroup": "ffdhe3072"
00:17:18.889 }
00:17:18.889 }
00:17:18.889 ]'
00:17:18.889 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:18.889 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:18.889 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:18.889 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:18.889 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:18.889 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:18.889 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:18.889 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:19.150 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=:
00:17:19.150 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=:
00:17:20.093 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:20.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:20.093 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:20.093 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:20.093 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:20.094 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:20.094 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:20.094 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:20.094 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:20.094 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:17:20.094 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:20.094 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:20.094 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:20.094 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:20.094 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:20.094 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:20.094 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:20.094 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:20.094 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:20.094 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:20.094 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:20.094 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:20.354
00:17:20.355 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:20.355 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:20.355 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:20.615 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:20.615 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:20.615 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:20.615 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:20.615 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:20.615 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:20.615 {
00:17:20.615 "cntlid": 67,
00:17:20.615 "qid": 0,
00:17:20.615 "state": "enabled",
00:17:20.615 "thread": "nvmf_tgt_poll_group_000",
00:17:20.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:20.615 "listen_address": {
00:17:20.615 "trtype": "TCP",
00:17:20.615 "adrfam": "IPv4",
00:17:20.615 "traddr": "10.0.0.2",
00:17:20.615 "trsvcid": "4420"
00:17:20.615 },
00:17:20.615 "peer_address": {
00:17:20.615 "trtype": "TCP",
00:17:20.615 "adrfam": "IPv4",
00:17:20.615 "traddr": "10.0.0.1",
00:17:20.615 "trsvcid": "53556"
00:17:20.615 },
00:17:20.615 "auth": {
00:17:20.615 "state": "completed",
00:17:20.615 "digest": "sha384",
00:17:20.615 "dhgroup": "ffdhe3072"
00:17:20.615 }
00:17:20.615 }
00:17:20.615 ]'
00:17:20.615 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:20.615 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:20.615 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:20.615 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:20.615 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:20.615 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:20.615 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:20.615 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:20.876 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==:
00:17:20.876 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==:
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:21.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:21.818 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:22.079
00:17:22.079 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:22.079 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:22.079 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:22.079 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:22.079 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:22.079 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:22.079 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:22.339 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:22.339 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:22.339 {
00:17:22.339 "cntlid": 69,
00:17:22.339 "qid": 0,
00:17:22.339 "state": "enabled",
00:17:22.339 "thread": "nvmf_tgt_poll_group_000",
00:17:22.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:22.339 "listen_address": {
00:17:22.339 "trtype": "TCP",
00:17:22.339 "adrfam": "IPv4",
00:17:22.339 "traddr": "10.0.0.2",
00:17:22.339 "trsvcid": "4420"
00:17:22.339 },
00:17:22.339 "peer_address": {
00:17:22.339 "trtype": "TCP",
00:17:22.339 "adrfam": "IPv4",
00:17:22.339 "traddr": "10.0.0.1",
00:17:22.339 "trsvcid": "53582"
00:17:22.339 },
00:17:22.339 "auth": {
00:17:22.339 "state": "completed",
00:17:22.339 "digest": "sha384",
00:17:22.339 "dhgroup": "ffdhe3072"
00:17:22.339 }
00:17:22.339 }
00:17:22.339 ]'
00:17:22.339 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:22.339 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:22.339 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:22.339 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:22.339 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:22.340 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:22.340 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:22.340 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:22.601 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/:
00:17:22.601 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/:
00:17:23.172 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:23.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
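After each attach, the script verifies the result (target/auth.sh@73-77): the controller must come up under its expected name, and the target's view of the qpair must report the negotiated digest, dhgroup, and a completed auth state. A condensed sketch of that check, using the same rpc.py and jq calls as the trace (qpairs here is an illustrative variable):

# Controller registered on the host under the expected name?
[[ $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# Target-side qpair reflects the parameters that were negotiated.
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]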
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:23.433 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:23.694
00:17:23.694 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:23.694 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:23.694 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:23.955 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:23.955 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:23.955 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.955 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:23.955 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.955 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:23.955 {
00:17:23.955 "cntlid": 71,
00:17:23.955 "qid": 0,
00:17:23.955 "state": "enabled",
00:17:23.955 "thread": "nvmf_tgt_poll_group_000",
00:17:23.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:23.955 "listen_address": {
00:17:23.955 "trtype": "TCP",
00:17:23.955 "adrfam": "IPv4",
00:17:23.955 "traddr": "10.0.0.2",
00:17:23.955 "trsvcid": "4420"
00:17:23.955 },
00:17:23.955 "peer_address": {
00:17:23.955 "trtype": "TCP",
00:17:23.955 "adrfam": "IPv4",
00:17:23.955 "traddr": "10.0.0.1",
00:17:23.955 "trsvcid": "33864"
00:17:23.955 },
00:17:23.955 "auth": {
00:17:23.955 "state": "completed",
00:17:23.955 "digest": "sha384",
00:17:23.955 "dhgroup": "ffdhe3072"
00:17:23.955 }
00:17:23.955 }
00:17:23.955 ]'
00:17:23.955 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:23.955 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:23.955 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:23.955 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:23.955 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:23.955 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:23.955 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:23.955 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:24.216 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=:
00:17:24.216 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=:
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:25.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
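The nvme_connect/nvme disconnect pairs in the trace (target/auth.sh@36 and @82) exercise the same DH-HMAC-CHAP handshake from the kernel initiator via nvme-cli, passing the secrets directly on the command line; the DHHC-1:xx:...: strings printed above are those secret representations. A sketch with the secrets abbreviated (flags as used by the trace; HOSTNQN as before is illustrative):

# Kernel-initiator connect with in-band authentication; --dhchap-ctrl-secret
# additionally authenticates the controller back to the host (bidirectional).
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
    --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0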
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:25.210 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:25.471
00:17:25.471 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:25.471 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:25.471 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:25.731 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:25.731 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:25.731 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:25.731 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:25.731 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:25.731 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:25.731 {
00:17:25.731 "cntlid": 73,
00:17:25.731 "qid": 0,
00:17:25.731 "state": "enabled",
00:17:25.731 "thread": "nvmf_tgt_poll_group_000",
00:17:25.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:25.731 "listen_address": {
00:17:25.731 "trtype": "TCP",
00:17:25.731 "adrfam": "IPv4",
00:17:25.731 "traddr": "10.0.0.2",
00:17:25.731 "trsvcid": "4420"
00:17:25.731 },
00:17:25.731 "peer_address": {
00:17:25.731 "trtype": "TCP",
00:17:25.731 "adrfam": "IPv4",
00:17:25.731 "traddr": "10.0.0.1",
00:17:25.731 "trsvcid": "33904"
00:17:25.731 },
00:17:25.731 "auth": {
00:17:25.731 "state": "completed",
00:17:25.731 "digest": "sha384",
00:17:25.731 "dhgroup": "ffdhe4096"
00:17:25.731 }
00:17:25.731 }
00:17:25.731 ]'
00:17:25.731 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:25.731 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:25.731 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:25.731 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:25.731 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:25.731 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:25.731 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:25.731 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:25.990 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=:
00:17:25.990 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=:
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:26.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:26.929 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:27.189
00:17:27.189 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:27.189 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:27.189 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:27.449 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:27.449 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:27.449 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:27.449 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.449 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:27.449 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:27.449 {
00:17:27.449 "cntlid": 75,
00:17:27.449 "qid": 0,
00:17:27.449 "state": "enabled",
00:17:27.449 "thread": "nvmf_tgt_poll_group_000",
00:17:27.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:27.449 "listen_address": {
00:17:27.449 "trtype": "TCP",
00:17:27.449 "adrfam": "IPv4",
00:17:27.449 "traddr": "10.0.0.2",
00:17:27.449 "trsvcid": "4420"
00:17:27.449 },
00:17:27.449 "peer_address": {
00:17:27.449 "trtype": "TCP",
00:17:27.449 "adrfam": "IPv4",
00:17:27.449 "traddr": "10.0.0.1",
00:17:27.449 "trsvcid": "33938"
00:17:27.449 },
00:17:27.449 "auth": {
00:17:27.449 "state": "completed",
00:17:27.449 "digest": "sha384",
00:17:27.449 "dhgroup": "ffdhe4096"
00:17:27.449 }
00:17:27.449 }
00:17:27.449 ]'
00:17:27.449 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:27.449 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:27.449 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:27.449 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:27.449 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:27.449 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:27.449 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:27.449 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:27.708 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==:
00:17:27.708 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==:
00:17:28.648 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:28.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:28.648 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:28.648 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:28.648 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:28.648 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:28.648 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:28.648 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:17:28.648 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:17:28.648 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
00:17:28.648 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:28.648 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:28.648 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:28.648 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:28.648 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:28.648 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:28.648 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:28.648 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:28.649 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:28.649 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:28.649 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:28.649 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:28.908
00:17:28.908 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:28.908 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:28.908 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:29.168 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:29.168 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:29.168 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:29.168 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:29.168 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:29.168 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:29.168 {
00:17:29.168 "cntlid": 77,
00:17:29.168 "qid": 0,
00:17:29.168 "state": "enabled",
00:17:29.168 "thread": "nvmf_tgt_poll_group_000",
00:17:29.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:29.168 "listen_address": {
00:17:29.168 "trtype": "TCP",
00:17:29.168 "adrfam": "IPv4",
00:17:29.168 "traddr": "10.0.0.2",
00:17:29.168 "trsvcid": "4420"
00:17:29.168 },
00:17:29.168 "peer_address": {
00:17:29.168 "trtype": "TCP",
00:17:29.168 "adrfam": "IPv4",
00:17:29.168 "traddr": "10.0.0.1",
00:17:29.168 "trsvcid": "33962"
00:17:29.168 },
00:17:29.168 "auth": {
00:17:29.168 "state": "completed",
00:17:29.168 "digest": "sha384",
00:17:29.168 "dhgroup": "ffdhe4096"
00:17:29.168 }
00:17:29.168 }
00:17:29.168 ]'
00:17:29.168 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:29.168 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:29.168 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:29.169 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:29.169 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:29.169 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:29.169 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:29.169 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:29.428 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/:
00:17:29.428 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/:
00:17:30.366 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:30.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:30.366 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:30.366 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:30.366 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:30.366 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:30.366 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:30.366 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:17:30.366 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:17:30.366 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:17:30.366 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:30.366 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:30.366 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:30.366 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:30.366 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:30.367 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:17:30.367 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:30.367 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:30.367 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:30.367 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:30.367 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:30.367 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:30.626
00:17:30.626 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:30.626 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:30.626 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:30.886 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:30.886 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:30.886 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:30.886 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:30.886 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:30.886 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:30.886 {
00:17:30.886 "cntlid": 79,
00:17:30.886 "qid": 0,
00:17:30.886 "state": "enabled",
00:17:30.886 "thread": "nvmf_tgt_poll_group_000",
00:17:30.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:30.886 "listen_address": {
00:17:30.886 "trtype": "TCP",
00:17:30.886 "adrfam": "IPv4",
00:17:30.886 "traddr": "10.0.0.2",
00:17:30.886 "trsvcid": "4420"
00:17:30.886 },
00:17:30.886 "peer_address": {
00:17:30.886 "trtype": "TCP",
00:17:30.886 "adrfam": "IPv4",
00:17:30.886 "traddr": "10.0.0.1",
00:17:30.886 "trsvcid": "33980"
00:17:30.886 },
00:17:30.886 "auth": {
00:17:30.886 "state": "completed",
00:17:30.886 "digest": "sha384",
00:17:30.886 "dhgroup": "ffdhe4096"
00:17:30.886 }
00:17:30.886 }
00:17:30.886 ]'
00:17:30.886 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:30.886 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:30.886 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:30.886 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:30.886 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:30.886 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:30.886 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:30.886 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:31.146 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=:
00:17:31.146 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=:
00:17:32.086 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:32.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:32.086 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:32.086 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:32.086 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:32.086 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:32.086 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:32.086 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:32.086 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:32.086 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:32.087 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:17:32.087 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:32.087 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:32.087 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:32.087 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:32.087 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:32.087 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:32.087 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:32.087 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:32.087 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:32.087 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:32.087 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:32.087 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:32.347
00:17:32.607 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:32.608 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:32.608 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:32.608 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:32.608 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:32.608 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:32.608 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:32.608 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:32.608 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:32.608 {
00:17:32.608 "cntlid": 81,
00:17:32.608 "qid": 0,
00:17:32.608 "state": "enabled",
00:17:32.608 "thread": "nvmf_tgt_poll_group_000",
00:17:32.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:32.608 "listen_address": {
00:17:32.608 "trtype": "TCP",
00:17:32.608 "adrfam": "IPv4",
00:17:32.608 "traddr": "10.0.0.2",
00:17:32.608 "trsvcid": "4420"
00:17:32.608 },
00:17:32.608 "peer_address": {
00:17:32.608 "trtype": "TCP",
00:17:32.608 "adrfam": "IPv4",
00:17:32.608 "traddr": "10.0.0.1",
00:17:32.608 "trsvcid": "34002"
00:17:32.608 },
00:17:32.608 "auth": {
00:17:32.608 "state": "completed",
00:17:32.608 "digest":
"sha384", 00:17:32.608 "dhgroup": "ffdhe6144" 00:17:32.608 } 00:17:32.608 } 00:17:32.608 ]' 00:17:32.608 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.608 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.608 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.869 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.869 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.869 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.869 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.869 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.869 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:17:32.869 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:17:33.809 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.809 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.809 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.809 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.809 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.809 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.809 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.809 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.809 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:33.809 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.809 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.809 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:33.809 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:33.809 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.809 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.809 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.809 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.070 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.070 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.070 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.070 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.330 00:17:34.330 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.330 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.330 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.603 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.603 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.603 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.603 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.603 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.603 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.603 { 00:17:34.603 "cntlid": 83, 00:17:34.603 "qid": 0, 00:17:34.603 "state": "enabled", 00:17:34.603 "thread": "nvmf_tgt_poll_group_000", 00:17:34.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.603 "listen_address": { 00:17:34.603 "trtype": "TCP", 00:17:34.603 "adrfam": "IPv4", 00:17:34.603 "traddr": "10.0.0.2", 00:17:34.603 
"trsvcid": "4420" 00:17:34.603 }, 00:17:34.603 "peer_address": { 00:17:34.603 "trtype": "TCP", 00:17:34.603 "adrfam": "IPv4", 00:17:34.603 "traddr": "10.0.0.1", 00:17:34.603 "trsvcid": "54520" 00:17:34.603 }, 00:17:34.603 "auth": { 00:17:34.603 "state": "completed", 00:17:34.603 "digest": "sha384", 00:17:34.603 "dhgroup": "ffdhe6144" 00:17:34.603 } 00:17:34.603 } 00:17:34.603 ]' 00:17:34.603 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.603 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.603 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.603 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.603 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.603 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.603 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.603 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.918 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:17:34.918 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:17:35.581 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.581 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.581 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.581 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.581 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.581 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.581 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:35.581 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:35.839 
12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:35.839 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.840 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.840 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:35.840 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:35.840 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.840 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.840 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.840 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.840 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.840 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.840 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.840 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.098 00:17:36.098 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.098 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.098 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.358 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.358 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.358 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.358 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.358 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.358 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.358 { 00:17:36.358 "cntlid": 85, 00:17:36.358 "qid": 0, 00:17:36.358 "state": "enabled", 00:17:36.358 "thread": "nvmf_tgt_poll_group_000", 00:17:36.358 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:36.358 "listen_address": { 00:17:36.358 "trtype": "TCP", 00:17:36.358 "adrfam": "IPv4", 00:17:36.358 "traddr": "10.0.0.2", 00:17:36.358 "trsvcid": "4420" 00:17:36.358 }, 00:17:36.358 "peer_address": { 00:17:36.358 "trtype": "TCP", 00:17:36.358 "adrfam": "IPv4", 00:17:36.358 "traddr": "10.0.0.1", 00:17:36.358 "trsvcid": "54544" 00:17:36.358 }, 00:17:36.358 "auth": { 00:17:36.358 "state": "completed", 00:17:36.358 "digest": "sha384", 00:17:36.358 "dhgroup": "ffdhe6144" 00:17:36.358 } 00:17:36.358 } 00:17:36.358 ]' 00:17:36.358 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.358 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.358 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.358 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:36.358 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.358 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.358 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.358 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.618 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:17:36.618 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:17:37.556 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.556 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.556 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.556 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.556 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.556 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.556 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:37.556 12:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:37.556 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:37.556 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.556 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.556 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:37.556 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:37.556 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.556 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:37.556 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.556 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.556 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.556 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:37.556 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.556 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.122 00:17:38.122 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.122 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.122 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.122 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.122 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.122 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.122 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.122 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.122 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.122 { 00:17:38.122 "cntlid": 87, 
00:17:38.122 "qid": 0, 00:17:38.122 "state": "enabled", 00:17:38.122 "thread": "nvmf_tgt_poll_group_000", 00:17:38.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:38.122 "listen_address": { 00:17:38.122 "trtype": "TCP", 00:17:38.122 "adrfam": "IPv4", 00:17:38.122 "traddr": "10.0.0.2", 00:17:38.122 "trsvcid": "4420" 00:17:38.122 }, 00:17:38.122 "peer_address": { 00:17:38.122 "trtype": "TCP", 00:17:38.122 "adrfam": "IPv4", 00:17:38.122 "traddr": "10.0.0.1", 00:17:38.122 "trsvcid": "54580" 00:17:38.122 }, 00:17:38.122 "auth": { 00:17:38.122 "state": "completed", 00:17:38.122 "digest": "sha384", 00:17:38.122 "dhgroup": "ffdhe6144" 00:17:38.122 } 00:17:38.122 } 00:17:38.122 ]' 00:17:38.122 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.122 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.122 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.382 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:38.382 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.382 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.382 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.382 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.382 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:17:38.382 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:17:39.319 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.319 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.319 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.319 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.320 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.889 00:17:39.889 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.889 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.889 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.150 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.150 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.150 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.150 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.150 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.150 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.150 { 00:17:40.150 "cntlid": 89, 00:17:40.150 "qid": 0, 00:17:40.150 "state": "enabled", 00:17:40.150 "thread": "nvmf_tgt_poll_group_000", 00:17:40.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:40.150 "listen_address": { 00:17:40.150 "trtype": "TCP", 00:17:40.150 "adrfam": "IPv4", 00:17:40.150 "traddr": "10.0.0.2", 00:17:40.150 "trsvcid": "4420" 00:17:40.150 }, 00:17:40.150 "peer_address": { 00:17:40.150 "trtype": "TCP", 00:17:40.150 "adrfam": "IPv4", 00:17:40.150 "traddr": "10.0.0.1", 00:17:40.150 "trsvcid": "54624" 00:17:40.150 }, 00:17:40.150 "auth": { 00:17:40.150 "state": "completed", 00:17:40.150 "digest": "sha384", 00:17:40.150 "dhgroup": "ffdhe8192" 00:17:40.150 } 00:17:40.150 } 00:17:40.150 ]' 00:17:40.150 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.150 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.150 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.150 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.150 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.410 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.410 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.410 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.411 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:17:40.411 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:17:40.982 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.243 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.243 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.243 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.243 12:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.243 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.243 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:41.243 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:41.243 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:41.243 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.243 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:41.243 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:41.243 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:41.243 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.243 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.243 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.243 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.243 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.244 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.244 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.244 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.815 00:17:41.816 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.816 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.816 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.076 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.076 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:42.076 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.076 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.077 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.077 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.077 { 00:17:42.077 "cntlid": 91, 00:17:42.077 "qid": 0, 00:17:42.077 "state": "enabled", 00:17:42.077 "thread": "nvmf_tgt_poll_group_000", 00:17:42.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:42.077 "listen_address": { 00:17:42.077 "trtype": "TCP", 00:17:42.077 "adrfam": "IPv4", 00:17:42.077 "traddr": "10.0.0.2", 00:17:42.077 "trsvcid": "4420" 00:17:42.077 }, 00:17:42.077 "peer_address": { 00:17:42.077 "trtype": "TCP", 00:17:42.077 "adrfam": "IPv4", 00:17:42.077 "traddr": "10.0.0.1", 00:17:42.077 "trsvcid": "54652" 00:17:42.077 }, 00:17:42.077 "auth": { 00:17:42.077 "state": "completed", 00:17:42.077 "digest": "sha384", 00:17:42.077 "dhgroup": "ffdhe8192" 00:17:42.077 } 00:17:42.077 } 00:17:42.077 ]' 00:17:42.077 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.077 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.077 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.077 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:42.077 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.077 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.077 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.077 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.337 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:17:42.337 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.282 12:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.282 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.855 00:17:43.855 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.855 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.855 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.116 12:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.116 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.116 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.116 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.116 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.116 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.116 { 00:17:44.116 "cntlid": 93, 00:17:44.116 "qid": 0, 00:17:44.116 "state": "enabled", 00:17:44.116 "thread": "nvmf_tgt_poll_group_000", 00:17:44.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.116 "listen_address": { 00:17:44.116 "trtype": "TCP", 00:17:44.116 "adrfam": "IPv4", 00:17:44.116 "traddr": "10.0.0.2", 00:17:44.116 "trsvcid": "4420" 00:17:44.116 }, 00:17:44.116 "peer_address": { 00:17:44.116 "trtype": "TCP", 00:17:44.116 "adrfam": "IPv4", 00:17:44.116 "traddr": "10.0.0.1", 00:17:44.116 "trsvcid": "54686" 00:17:44.116 }, 00:17:44.116 "auth": { 00:17:44.116 "state": "completed", 00:17:44.116 "digest": "sha384", 00:17:44.116 "dhgroup": "ffdhe8192" 00:17:44.116 } 00:17:44.116 } 00:17:44.116 ]' 00:17:44.116 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.116 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.116 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.116 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.116 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.116 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.116 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.116 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.376 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:17:44.376 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:17:44.947 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.206 12:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.206 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.778 00:17:45.778 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.778 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.778 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.039 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.039 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.040 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.040 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.040 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.040 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.040 { 00:17:46.040 "cntlid": 95, 00:17:46.040 "qid": 0, 00:17:46.040 "state": "enabled", 00:17:46.040 "thread": "nvmf_tgt_poll_group_000", 00:17:46.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:46.040 "listen_address": { 00:17:46.040 "trtype": "TCP", 00:17:46.040 "adrfam": "IPv4", 00:17:46.040 "traddr": "10.0.0.2", 00:17:46.040 "trsvcid": "4420" 00:17:46.040 }, 00:17:46.040 "peer_address": { 00:17:46.040 "trtype": "TCP", 00:17:46.040 "adrfam": "IPv4", 00:17:46.040 "traddr": "10.0.0.1", 00:17:46.040 "trsvcid": "37014" 00:17:46.040 }, 00:17:46.040 "auth": { 00:17:46.040 "state": "completed", 00:17:46.040 "digest": "sha384", 00:17:46.040 "dhgroup": "ffdhe8192" 00:17:46.040 } 00:17:46.040 } 00:17:46.040 ]' 00:17:46.040 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.040 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.040 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.040 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:46.040 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.040 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.040 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.040 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.301 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:17:46.301 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.245 12:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.245 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.513 00:17:47.513 
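[Annotation] The iteration beginning here (sha512 with the null dhgroup, key0) follows the same three-step shape as every other pass in this section: pin the host to one digest/dhgroup pair, register the host NQN on the subsystem with the keys under test, then attach a controller so the authentication exchange actually runs. A minimal sketch of that sequence, built only from commands visible in this log; the relative rpc.py path, the $hostnqn variable standing for the uuid-based host NQN, and the target-side default RPC socket are my shorthand, not taken from the log:

    # Host side (hostrpc in the log talks to /var/tmp/host.sock):
    # restrict DH-HMAC-CHAP negotiation to a single digest/dhgroup pair.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null

    # Target side (rpc_cmd in the log; default target RPC socket assumed):
    # allow the host NQN on the subsystem with the key pair under test.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach a controller; the DH-HMAC-CHAP exchange happens here.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0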
12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.514 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.514 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.780 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.780 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.780 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.780 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.780 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.780 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.780 { 00:17:47.780 "cntlid": 97, 00:17:47.780 "qid": 0, 00:17:47.780 "state": "enabled", 00:17:47.780 "thread": "nvmf_tgt_poll_group_000", 00:17:47.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.780 "listen_address": { 00:17:47.780 "trtype": "TCP", 00:17:47.780 "adrfam": "IPv4", 00:17:47.780 "traddr": "10.0.0.2", 00:17:47.780 "trsvcid": "4420" 00:17:47.780 }, 00:17:47.780 "peer_address": { 00:17:47.780 "trtype": "TCP", 00:17:47.780 "adrfam": "IPv4", 00:17:47.780 "traddr": "10.0.0.1", 00:17:47.780 "trsvcid": "37052" 00:17:47.780 }, 00:17:47.780 "auth": { 00:17:47.780 "state": "completed", 00:17:47.780 "digest": "sha512", 00:17:47.780 "dhgroup": "null" 00:17:47.780 } 00:17:47.780 } 00:17:47.780 ]' 00:17:47.780 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.780 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.780 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.780 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:47.780 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.780 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.780 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.780 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.040 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:17:48.040 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.982 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.242 00:17:49.242 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.242 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.242 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.502 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.502 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.502 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.502 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.502 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.502 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.502 { 00:17:49.502 "cntlid": 99, 00:17:49.502 "qid": 0, 00:17:49.502 "state": "enabled", 00:17:49.502 "thread": "nvmf_tgt_poll_group_000", 00:17:49.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:49.502 "listen_address": { 00:17:49.502 "trtype": "TCP", 00:17:49.502 "adrfam": "IPv4", 00:17:49.502 "traddr": "10.0.0.2", 00:17:49.502 "trsvcid": "4420" 00:17:49.502 }, 00:17:49.502 "peer_address": { 00:17:49.502 "trtype": "TCP", 00:17:49.502 "adrfam": "IPv4", 00:17:49.502 "traddr": "10.0.0.1", 00:17:49.502 "trsvcid": "37076" 00:17:49.502 }, 00:17:49.502 "auth": { 00:17:49.502 "state": "completed", 00:17:49.502 "digest": "sha512", 00:17:49.502 "dhgroup": "null" 00:17:49.502 } 00:17:49.502 } 00:17:49.502 ]' 00:17:49.502 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.502 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.502 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.502 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:49.502 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.502 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.502 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.502 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.763 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:17:49.763 12:22:24 
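[Annotation] The qpair dumps in this stream are how each pass proves the negotiation really used the requested parameters: the host lists its controllers, the target reports the qpair, and jq pulls the negotiated auth fields out for comparison. A condensed sketch of those checks, using the same jq filters the log shows; $hostnqn and the relative rpc.py path remain shorthand:

    # Host side: the controller created by bdev_nvme_attach_controller exists.
    name=$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers \
           | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # Target side: read the qpair back and check the negotiated auth block.
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]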
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:17:50.748 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.748 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.748 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.748 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.748 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.748 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.748 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:50.748 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:50.748 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:50.748 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.748 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.748 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:50.748 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:50.748 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.748 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.748 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.748 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.748 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.748 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.748 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:50.748 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.009 00:17:51.009 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.009 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.009 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.009 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.009 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.009 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.009 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.009 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.009 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.009 { 00:17:51.009 "cntlid": 101, 00:17:51.009 "qid": 0, 00:17:51.009 "state": "enabled", 00:17:51.009 "thread": "nvmf_tgt_poll_group_000", 00:17:51.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:51.009 "listen_address": { 00:17:51.009 "trtype": "TCP", 00:17:51.009 "adrfam": "IPv4", 00:17:51.009 "traddr": "10.0.0.2", 00:17:51.009 "trsvcid": "4420" 00:17:51.009 }, 00:17:51.009 "peer_address": { 00:17:51.009 "trtype": "TCP", 00:17:51.009 "adrfam": "IPv4", 00:17:51.009 "traddr": "10.0.0.1", 00:17:51.009 "trsvcid": "37102" 00:17:51.009 }, 00:17:51.009 "auth": { 00:17:51.009 "state": "completed", 00:17:51.009 "digest": "sha512", 00:17:51.009 "dhgroup": "null" 00:17:51.009 } 00:17:51.009 } 00:17:51.009 ]' 00:17:51.009 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.269 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.269 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.270 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:51.270 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.270 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.270 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.270 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.530 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:17:51.530 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:17:52.099 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.099 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.099 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.099 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.099 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.099 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.099 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:52.099 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:52.360 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:52.360 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.360 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.360 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:52.360 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:52.360 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.360 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:52.360 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.360 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.360 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.360 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:52.360 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.360 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.621 00:17:52.621 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.621 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.621 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.881 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.881 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.881 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.881 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.881 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.881 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.881 { 00:17:52.881 "cntlid": 103, 00:17:52.881 "qid": 0, 00:17:52.881 "state": "enabled", 00:17:52.881 "thread": "nvmf_tgt_poll_group_000", 00:17:52.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.881 "listen_address": { 00:17:52.881 "trtype": "TCP", 00:17:52.881 "adrfam": "IPv4", 00:17:52.881 "traddr": "10.0.0.2", 00:17:52.881 "trsvcid": "4420" 00:17:52.881 }, 00:17:52.881 "peer_address": { 00:17:52.881 "trtype": "TCP", 00:17:52.881 "adrfam": "IPv4", 00:17:52.881 "traddr": "10.0.0.1", 00:17:52.881 "trsvcid": "37134" 00:17:52.881 }, 00:17:52.881 "auth": { 00:17:52.881 "state": "completed", 00:17:52.881 "digest": "sha512", 00:17:52.881 "dhgroup": "null" 00:17:52.881 } 00:17:52.881 } 00:17:52.881 ]' 00:17:52.881 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.881 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.881 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.881 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:52.881 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.881 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.881 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.881 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.141 12:22:27 
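[Annotation] The key3 pass that just completed is the one place in each group where the controller key disappears: connect_authenticate builds that argument as ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), and the :+ expansion emits the flag pair only when ckeys[$3] is non-empty. That is why the key3 add_host and attach calls above carry --dhchap-key key3 alone, while the key0-key2 calls also pass a ctrlr key. A standalone illustration of the same expansion; the array contents here are hypothetical:

    # Hypothetical ckeys array: index 3 deliberately has no controller key.
    ckeys=( "c0" "c1" "c2" "" )
    for keyid in "${!ckeys[@]}"; do
        # :+ expands to the flag pair only when ckeys[$keyid] is non-empty.
        ckey=( ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"} )
        echo "key$keyid -> ${ckey[*]:-<no ctrlr key>}"
    done
    # Prints the flag pair for key0..key2, then "<no ctrlr key>" for key3.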
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:17:53.142 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
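[Annotation] The auth.sh@118-@121 markers threaded through this output (for digest, for dhgroup, for keyid, then bdev_nvme_set_options) show that everything in this section is one pass of a triple loop, re-authenticating once per digest x dhgroup x key-index combination. Reconstructed from those xtrace markers, the control flow looks roughly like:

    for digest in "${digests[@]}"; do            # auth.sh@118
        for dhgroup in "${dhgroups[@]}"; do      # auth.sh@119
            for keyid in "${!keys[@]}"; do       # auth.sh@120
                # Pin the host to exactly one digest/dhgroup pair.   # @121
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                # Run the attach/verify/teardown cycle seen above.   # @123
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done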
00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.082 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.343 00:17:54.343 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.343 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.343 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.603 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.603 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.603 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.603 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.603 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.603 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.603 { 00:17:54.603 "cntlid": 105, 00:17:54.603 "qid": 0, 00:17:54.603 "state": "enabled", 00:17:54.603 "thread": "nvmf_tgt_poll_group_000", 00:17:54.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:54.603 "listen_address": { 00:17:54.603 "trtype": "TCP", 00:17:54.603 "adrfam": "IPv4", 00:17:54.603 "traddr": "10.0.0.2", 00:17:54.603 "trsvcid": "4420" 00:17:54.603 }, 00:17:54.603 "peer_address": { 00:17:54.603 "trtype": "TCP", 00:17:54.603 "adrfam": "IPv4", 00:17:54.603 "traddr": "10.0.0.1", 00:17:54.603 "trsvcid": "40702" 00:17:54.603 }, 00:17:54.603 "auth": { 00:17:54.603 "state": "completed", 00:17:54.603 "digest": "sha512", 00:17:54.603 "dhgroup": "ffdhe2048" 00:17:54.603 } 00:17:54.603 } 00:17:54.603 ]' 00:17:54.603 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.603 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.603 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.603 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:54.603 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.603 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.603 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.603 12:22:29 
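[Annotation] After the RPC-path controller is detached, each iteration re-verifies the same keys through the kernel initiator, as the nvme connect / nvme disconnect entries that follow show. The secrets travel in the DHHC-1:&lt;t&gt;:&lt;base64&gt;: wire format seen throughout this log; reading &lt;t&gt; as the secret's hash transform (00 unhashed, 01/02/03 for SHA-256/384/512) follows the NVMe DH-HMAC-CHAP secret representation rather than anything this log states. A sketch with the secrets elided:

    # Kernel initiator: -i 1 requests a single I/O queue and -l 0 disables
    # reconnect attempts (nvme-cli's --nr-io-queues / --ctrl-loss-tmo).
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "DHHC-1:00:...:" \
        --dhchap-ctrl-secret "DHHC-1:03:...:"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

Note the naming asymmetry the log itself exhibits: the SPDK RPC path spells the flag --dhchap-ctrlr-key, while nvme-cli spells it --dhchap-ctrl-secret.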
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.863 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:17:54.863 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.804 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.065 00:17:56.065 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.065 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.065 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.327 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.327 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.327 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.327 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.327 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.327 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.327 { 00:17:56.327 "cntlid": 107, 00:17:56.327 "qid": 0, 00:17:56.327 "state": "enabled", 00:17:56.327 "thread": "nvmf_tgt_poll_group_000", 00:17:56.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:56.327 "listen_address": { 00:17:56.327 "trtype": "TCP", 00:17:56.327 "adrfam": "IPv4", 00:17:56.327 "traddr": "10.0.0.2", 00:17:56.327 "trsvcid": "4420" 00:17:56.327 }, 00:17:56.327 "peer_address": { 00:17:56.327 "trtype": "TCP", 00:17:56.327 "adrfam": "IPv4", 00:17:56.327 "traddr": "10.0.0.1", 00:17:56.327 "trsvcid": "40724" 00:17:56.327 }, 00:17:56.327 "auth": { 00:17:56.327 "state": "completed", 00:17:56.327 "digest": "sha512", 00:17:56.327 "dhgroup": "ffdhe2048" 00:17:56.327 } 00:17:56.327 } 00:17:56.327 ]' 00:17:56.327 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.327 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.327 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.327 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:56.327 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:56.327 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.327 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.327 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.588 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:17:56.588 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:17:57.529 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.530 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.791 00:17:57.791 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.791 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.791 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.791 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.791 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.791 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.791 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.791 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.791 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.791 { 00:17:57.791 "cntlid": 109, 00:17:57.791 "qid": 0, 00:17:57.791 "state": "enabled", 00:17:57.791 "thread": "nvmf_tgt_poll_group_000", 00:17:57.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:57.791 "listen_address": { 00:17:57.791 "trtype": "TCP", 00:17:57.791 "adrfam": "IPv4", 00:17:57.791 "traddr": "10.0.0.2", 00:17:57.791 "trsvcid": "4420" 00:17:57.791 }, 00:17:57.791 "peer_address": { 00:17:57.791 "trtype": "TCP", 00:17:57.791 "adrfam": "IPv4", 00:17:57.791 "traddr": "10.0.0.1", 00:17:57.791 "trsvcid": "40744" 00:17:57.791 }, 00:17:57.791 "auth": { 00:17:57.791 "state": "completed", 00:17:57.791 "digest": "sha512", 00:17:57.791 "dhgroup": "ffdhe2048" 00:17:57.791 } 00:17:57.791 } 00:17:57.791 ]' 00:17:57.791 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.053 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.053 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.053 12:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:58.053 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.053 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.053 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.053 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.313 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:17:58.313 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:17:58.885 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.885 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.885 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.885 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.885 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.885 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.885 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:58.885 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:59.146 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:59.146 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.146 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.146 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:59.146 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:59.146 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.146 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:59.147 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.147 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.147 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.147 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:59.147 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:59.147 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:59.407 00:17:59.407 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.407 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.407 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.668 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.668 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.668 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.668 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.668 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.668 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.668 { 00:17:59.668 "cntlid": 111, 00:17:59.668 "qid": 0, 00:17:59.668 "state": "enabled", 00:17:59.668 "thread": "nvmf_tgt_poll_group_000", 00:17:59.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.668 "listen_address": { 00:17:59.668 "trtype": "TCP", 00:17:59.668 "adrfam": "IPv4", 00:17:59.668 "traddr": "10.0.0.2", 00:17:59.668 "trsvcid": "4420" 00:17:59.668 }, 00:17:59.668 "peer_address": { 00:17:59.668 "trtype": "TCP", 00:17:59.668 "adrfam": "IPv4", 00:17:59.668 "traddr": "10.0.0.1", 00:17:59.668 "trsvcid": "40760" 00:17:59.668 }, 00:17:59.668 "auth": { 00:17:59.668 "state": "completed", 00:17:59.668 "digest": "sha512", 00:17:59.668 "dhgroup": "ffdhe2048" 00:17:59.668 } 00:17:59.668 } 00:17:59.668 ]' 00:17:59.668 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.668 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.668 
12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.668 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:59.668 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.668 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.668 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.668 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.928 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:17:59.928 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:18:00.869 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.869 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.869 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.869 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.869 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.869 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.869 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:00.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:00.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:00.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
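[Annotation] Each cycle also ends the same way, and the tail of the entries above shows it: the RPC-path controller is dropped (auth.sh@78), the kernel session from the follow-up connect is torn down (@82), and the host is de-registered from the subsystem (@83) so the next digest/dhgroup/key combination starts from a clean slate. In command form, with the same shorthand as before:

    # Host side: drop the controller created for this iteration.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # Kernel side: end the nvme-cli session.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Target side: de-register the host before the next combination.
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"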
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.870 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.131 00:18:01.131 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.131 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.131 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.131 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.131 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.131 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.131 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.392 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.392 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.392 { 00:18:01.392 "cntlid": 113, 00:18:01.392 "qid": 0, 00:18:01.392 "state": "enabled", 00:18:01.392 "thread": "nvmf_tgt_poll_group_000", 00:18:01.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:01.392 "listen_address": { 00:18:01.392 "trtype": "TCP", 00:18:01.392 "adrfam": "IPv4", 00:18:01.392 "traddr": "10.0.0.2", 00:18:01.392 "trsvcid": "4420" 00:18:01.392 }, 00:18:01.392 "peer_address": { 00:18:01.392 "trtype": "TCP", 00:18:01.392 "adrfam": "IPv4", 00:18:01.392 "traddr": "10.0.0.1", 00:18:01.392 "trsvcid": "40776" 00:18:01.392 }, 00:18:01.392 "auth": { 00:18:01.392 "state": "completed", 00:18:01.392 "digest": "sha512", 00:18:01.392 "dhgroup": "ffdhe3072" 00:18:01.392 } 00:18:01.392 } 00:18:01.392 ]' 00:18:01.392 12:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.392 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.392 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.392 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:01.392 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.392 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.392 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.392 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.653 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:18:01.653 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:18:02.225 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.225 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.225 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.225 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.225 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.225 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.225 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.225 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.486 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:02.486 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.486 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:02.486 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:02.486 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:02.486 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.487 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.487 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.487 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.487 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.487 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.487 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.487 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.748 00:18:02.748 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.748 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.748 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.009 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.009 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.009 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.009 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.009 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.009 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.009 { 00:18:03.009 "cntlid": 115, 00:18:03.009 "qid": 0, 00:18:03.009 "state": "enabled", 00:18:03.009 "thread": "nvmf_tgt_poll_group_000", 00:18:03.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:03.009 "listen_address": { 00:18:03.009 "trtype": "TCP", 00:18:03.009 "adrfam": "IPv4", 00:18:03.009 "traddr": "10.0.0.2", 00:18:03.009 "trsvcid": "4420" 00:18:03.009 }, 00:18:03.009 "peer_address": { 00:18:03.009 "trtype": "TCP", 00:18:03.009 "adrfam": "IPv4", 
00:18:03.009 "traddr": "10.0.0.1", 00:18:03.009 "trsvcid": "40808" 00:18:03.009 }, 00:18:03.009 "auth": { 00:18:03.009 "state": "completed", 00:18:03.009 "digest": "sha512", 00:18:03.009 "dhgroup": "ffdhe3072" 00:18:03.009 } 00:18:03.009 } 00:18:03.009 ]' 00:18:03.009 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.009 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.009 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.009 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:03.009 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.009 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.009 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.009 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.269 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:18:03.270 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.228 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.490 00:18:04.490 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.490 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.490 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.750 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.750 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.750 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.750 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.750 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.750 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.750 { 00:18:04.750 "cntlid": 117, 00:18:04.750 "qid": 0, 00:18:04.750 "state": "enabled", 00:18:04.750 "thread": "nvmf_tgt_poll_group_000", 00:18:04.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.750 "listen_address": { 00:18:04.750 "trtype": "TCP", 
00:18:04.750 "adrfam": "IPv4", 00:18:04.750 "traddr": "10.0.0.2", 00:18:04.750 "trsvcid": "4420" 00:18:04.750 }, 00:18:04.750 "peer_address": { 00:18:04.750 "trtype": "TCP", 00:18:04.750 "adrfam": "IPv4", 00:18:04.750 "traddr": "10.0.0.1", 00:18:04.750 "trsvcid": "52100" 00:18:04.750 }, 00:18:04.750 "auth": { 00:18:04.750 "state": "completed", 00:18:04.750 "digest": "sha512", 00:18:04.750 "dhgroup": "ffdhe3072" 00:18:04.750 } 00:18:04.750 } 00:18:04.750 ]' 00:18:04.750 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.750 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.750 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.750 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:04.750 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.750 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.750 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.750 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.011 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:18:05.011 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:05.962 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.223 00:18:06.223 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.223 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.223 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.484 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.484 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.484 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.484 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.484 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.484 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.484 { 00:18:06.484 "cntlid": 119, 00:18:06.484 "qid": 0, 00:18:06.484 "state": "enabled", 00:18:06.484 "thread": "nvmf_tgt_poll_group_000", 00:18:06.484 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.484 "listen_address": { 00:18:06.484 "trtype": "TCP", 00:18:06.484 "adrfam": "IPv4", 00:18:06.485 "traddr": "10.0.0.2", 00:18:06.485 "trsvcid": "4420" 00:18:06.485 }, 00:18:06.485 "peer_address": { 00:18:06.485 "trtype": "TCP", 00:18:06.485 "adrfam": "IPv4", 00:18:06.485 "traddr": "10.0.0.1", 00:18:06.485 "trsvcid": "52132" 00:18:06.485 }, 00:18:06.485 "auth": { 00:18:06.485 "state": "completed", 00:18:06.485 "digest": "sha512", 00:18:06.485 "dhgroup": "ffdhe3072" 00:18:06.485 } 00:18:06.485 } 00:18:06.485 ]' 00:18:06.485 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.485 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.485 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.485 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:06.485 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.485 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.485 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.485 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.745 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:18:06.745 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:18:07.687 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.687 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.687 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.687 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.687 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.687 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.687 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.687 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.687 12:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.687 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:07.687 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.687 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.687 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:07.687 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:07.687 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.687 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.687 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.687 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.687 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.687 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.687 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.687 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.948 00:18:07.948 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.948 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.948 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.209 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.209 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.209 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.209 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.209 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.209 12:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.209 { 00:18:08.209 "cntlid": 121, 00:18:08.209 "qid": 0, 00:18:08.209 "state": "enabled", 00:18:08.209 "thread": "nvmf_tgt_poll_group_000", 00:18:08.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:08.209 "listen_address": { 00:18:08.209 "trtype": "TCP", 00:18:08.209 "adrfam": "IPv4", 00:18:08.209 "traddr": "10.0.0.2", 00:18:08.209 "trsvcid": "4420" 00:18:08.209 }, 00:18:08.209 "peer_address": { 00:18:08.209 "trtype": "TCP", 00:18:08.209 "adrfam": "IPv4", 00:18:08.209 "traddr": "10.0.0.1", 00:18:08.209 "trsvcid": "52162" 00:18:08.209 }, 00:18:08.209 "auth": { 00:18:08.209 "state": "completed", 00:18:08.209 "digest": "sha512", 00:18:08.209 "dhgroup": "ffdhe4096" 00:18:08.209 } 00:18:08.209 } 00:18:08.209 ]' 00:18:08.209 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.209 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.209 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.209 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.209 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.209 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.209 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.209 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.469 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:18:08.469 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:18:09.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:09.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:09.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:09.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:09.410 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.411 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.411 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.411 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.411 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.411 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.411 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.411 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.671 00:18:09.671 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.671 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.671 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.933 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.933 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.933 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.933 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.933 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.933 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.933 { 00:18:09.933 "cntlid": 123, 00:18:09.933 "qid": 0, 00:18:09.933 "state": "enabled", 00:18:09.933 "thread": "nvmf_tgt_poll_group_000", 00:18:09.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.933 "listen_address": { 00:18:09.933 "trtype": "TCP", 00:18:09.933 "adrfam": "IPv4", 00:18:09.933 "traddr": "10.0.0.2", 00:18:09.933 "trsvcid": "4420" 00:18:09.933 }, 00:18:09.933 "peer_address": { 00:18:09.933 "trtype": "TCP", 00:18:09.933 "adrfam": "IPv4", 00:18:09.933 "traddr": "10.0.0.1", 00:18:09.933 "trsvcid": "52188" 00:18:09.933 }, 00:18:09.933 "auth": { 00:18:09.933 "state": "completed", 00:18:09.933 "digest": "sha512", 00:18:09.933 "dhgroup": "ffdhe4096" 00:18:09.933 } 00:18:09.933 } 00:18:09.933 ]' 00:18:09.933 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.933 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.933 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.933 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:09.933 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.933 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.933 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.933 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.198 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:18:10.198 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.145 12:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.145 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.409 00:18:11.409 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.409 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.409 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.670 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.670 12:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.670 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.670 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.670 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.670 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.670 { 00:18:11.670 "cntlid": 125, 00:18:11.670 "qid": 0, 00:18:11.670 "state": "enabled", 00:18:11.670 "thread": "nvmf_tgt_poll_group_000", 00:18:11.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.670 "listen_address": { 00:18:11.670 "trtype": "TCP", 00:18:11.670 "adrfam": "IPv4", 00:18:11.670 "traddr": "10.0.0.2", 00:18:11.670 "trsvcid": "4420" 00:18:11.670 }, 00:18:11.670 "peer_address": { 00:18:11.670 "trtype": "TCP", 00:18:11.670 "adrfam": "IPv4", 00:18:11.670 "traddr": "10.0.0.1", 00:18:11.670 "trsvcid": "52216" 00:18:11.670 }, 00:18:11.670 "auth": { 00:18:11.670 "state": "completed", 00:18:11.670 "digest": "sha512", 00:18:11.670 "dhgroup": "ffdhe4096" 00:18:11.670 } 00:18:11.670 } 00:18:11.670 ]' 00:18:11.670 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.670 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.670 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.670 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:11.670 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.670 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.670 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.670 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.931 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:18:11.931 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.872 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:13.134 00:18:13.134 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.134 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.134 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.395 12:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.395 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.395 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.395 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.395 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.395 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.395 { 00:18:13.395 "cntlid": 127, 00:18:13.395 "qid": 0, 00:18:13.395 "state": "enabled", 00:18:13.395 "thread": "nvmf_tgt_poll_group_000", 00:18:13.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:13.395 "listen_address": { 00:18:13.395 "trtype": "TCP", 00:18:13.395 "adrfam": "IPv4", 00:18:13.395 "traddr": "10.0.0.2", 00:18:13.395 "trsvcid": "4420" 00:18:13.395 }, 00:18:13.395 "peer_address": { 00:18:13.395 "trtype": "TCP", 00:18:13.395 "adrfam": "IPv4", 00:18:13.395 "traddr": "10.0.0.1", 00:18:13.395 "trsvcid": "52248" 00:18:13.395 }, 00:18:13.395 "auth": { 00:18:13.395 "state": "completed", 00:18:13.395 "digest": "sha512", 00:18:13.395 "dhgroup": "ffdhe4096" 00:18:13.395 } 00:18:13.395 } 00:18:13.395 ]' 00:18:13.395 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.395 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.395 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.395 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:13.395 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.655 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.655 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.655 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.655 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:18:13.655 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:18:14.598 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.598 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.598 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.598 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.598 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.598 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.598 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.598 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.598 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.598 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:14.598 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.598 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.598 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:14.598 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:14.598 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.598 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.598 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.598 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.598 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.598 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.598 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.598 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.920 00:18:15.278 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.278 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.278 
12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.278 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.278 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.278 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.278 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.278 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.278 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.278 { 00:18:15.278 "cntlid": 129, 00:18:15.278 "qid": 0, 00:18:15.278 "state": "enabled", 00:18:15.278 "thread": "nvmf_tgt_poll_group_000", 00:18:15.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:15.278 "listen_address": { 00:18:15.278 "trtype": "TCP", 00:18:15.278 "adrfam": "IPv4", 00:18:15.278 "traddr": "10.0.0.2", 00:18:15.278 "trsvcid": "4420" 00:18:15.278 }, 00:18:15.278 "peer_address": { 00:18:15.278 "trtype": "TCP", 00:18:15.278 "adrfam": "IPv4", 00:18:15.278 "traddr": "10.0.0.1", 00:18:15.278 "trsvcid": "42802" 00:18:15.279 }, 00:18:15.279 "auth": { 00:18:15.279 "state": "completed", 00:18:15.279 "digest": "sha512", 00:18:15.279 "dhgroup": "ffdhe6144" 00:18:15.279 } 00:18:15.279 } 00:18:15.279 ]' 00:18:15.279 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.279 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.279 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.279 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:15.279 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.279 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.279 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.279 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.538 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:18:15.539 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret 
DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.480 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.741 00:18:16.741 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.741 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.741 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.003 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.003 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.003 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.003 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.003 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.003 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.003 { 00:18:17.003 "cntlid": 131, 00:18:17.003 "qid": 0, 00:18:17.003 "state": "enabled", 00:18:17.003 "thread": "nvmf_tgt_poll_group_000", 00:18:17.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.003 "listen_address": { 00:18:17.003 "trtype": "TCP", 00:18:17.003 "adrfam": "IPv4", 00:18:17.003 "traddr": "10.0.0.2", 00:18:17.003 "trsvcid": "4420" 00:18:17.003 }, 00:18:17.003 "peer_address": { 00:18:17.003 "trtype": "TCP", 00:18:17.003 "adrfam": "IPv4", 00:18:17.003 "traddr": "10.0.0.1", 00:18:17.003 "trsvcid": "42840" 00:18:17.003 }, 00:18:17.003 "auth": { 00:18:17.003 "state": "completed", 00:18:17.003 "digest": "sha512", 00:18:17.003 "dhgroup": "ffdhe6144" 00:18:17.003 } 00:18:17.003 } 00:18:17.003 ]' 00:18:17.003 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.003 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.003 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.264 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:17.264 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.264 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.264 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.264 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.264 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:18:17.264 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.206 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.778 00:18:18.778 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.778 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.778 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.778 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.778 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.778 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.778 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.778 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.778 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.778 { 00:18:18.778 "cntlid": 133, 00:18:18.778 "qid": 0, 00:18:18.778 "state": "enabled", 00:18:18.778 "thread": "nvmf_tgt_poll_group_000", 00:18:18.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:18.778 "listen_address": { 00:18:18.778 "trtype": "TCP", 00:18:18.778 "adrfam": "IPv4", 00:18:18.778 "traddr": "10.0.0.2", 00:18:18.778 "trsvcid": "4420" 00:18:18.778 }, 00:18:18.778 "peer_address": { 00:18:18.778 "trtype": "TCP", 00:18:18.778 "adrfam": "IPv4", 00:18:18.778 "traddr": "10.0.0.1", 00:18:18.778 "trsvcid": "42862" 00:18:18.778 }, 00:18:18.778 "auth": { 00:18:18.778 "state": "completed", 00:18:18.778 "digest": "sha512", 00:18:18.778 "dhgroup": "ffdhe6144" 00:18:18.778 } 00:18:18.778 } 00:18:18.778 ]' 00:18:18.778 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.039 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.039 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.039 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:19.039 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.039 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.039 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.039 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.299 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret 
DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:18:19.299 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:18:19.870 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.130 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.130 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.130 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.130 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.130 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.130 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:20.130 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:20.130 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:20.130 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.130 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.130 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:20.130 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:20.130 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.130 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:20.130 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.130 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.390 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.390 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:20.390 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:20.390 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.651 00:18:20.651 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.651 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.651 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.912 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.912 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.912 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.912 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.912 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.912 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.912 { 00:18:20.912 "cntlid": 135, 00:18:20.912 "qid": 0, 00:18:20.912 "state": "enabled", 00:18:20.912 "thread": "nvmf_tgt_poll_group_000", 00:18:20.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:20.912 "listen_address": { 00:18:20.912 "trtype": "TCP", 00:18:20.912 "adrfam": "IPv4", 00:18:20.912 "traddr": "10.0.0.2", 00:18:20.912 "trsvcid": "4420" 00:18:20.912 }, 00:18:20.912 "peer_address": { 00:18:20.912 "trtype": "TCP", 00:18:20.912 "adrfam": "IPv4", 00:18:20.912 "traddr": "10.0.0.1", 00:18:20.912 "trsvcid": "42880" 00:18:20.912 }, 00:18:20.912 "auth": { 00:18:20.912 "state": "completed", 00:18:20.912 "digest": "sha512", 00:18:20.912 "dhgroup": "ffdhe6144" 00:18:20.912 } 00:18:20.912 } 00:18:20.912 ]' 00:18:20.912 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.912 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.912 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.912 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:20.912 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.912 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.912 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.912 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.174 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:18:21.174 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:18:21.746 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.007 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.578 00:18:22.578 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.578 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.578 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.839 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.839 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.839 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.839 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.839 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.839 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.839 { 00:18:22.839 "cntlid": 137, 00:18:22.839 "qid": 0, 00:18:22.839 "state": "enabled", 00:18:22.839 "thread": "nvmf_tgt_poll_group_000", 00:18:22.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:22.840 "listen_address": { 00:18:22.840 "trtype": "TCP", 00:18:22.840 "adrfam": "IPv4", 00:18:22.840 "traddr": "10.0.0.2", 00:18:22.840 "trsvcid": "4420" 00:18:22.840 }, 00:18:22.840 "peer_address": { 00:18:22.840 "trtype": "TCP", 00:18:22.840 "adrfam": "IPv4", 00:18:22.840 "traddr": "10.0.0.1", 00:18:22.840 "trsvcid": "42922" 00:18:22.840 }, 00:18:22.840 "auth": { 00:18:22.840 "state": "completed", 00:18:22.840 "digest": "sha512", 00:18:22.840 "dhgroup": "ffdhe8192" 00:18:22.840 } 00:18:22.840 } 00:18:22.840 ]' 00:18:22.840 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.840 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.840 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.840 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.840 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.840 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.840 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.840 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.101 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:18:23.101 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.044 12:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.616 00:18:24.616 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.616 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.616 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.877 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.878 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.878 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.878 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.878 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.878 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.878 { 00:18:24.878 "cntlid": 139, 00:18:24.878 "qid": 0, 00:18:24.878 "state": "enabled", 00:18:24.878 "thread": "nvmf_tgt_poll_group_000", 00:18:24.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:24.878 "listen_address": { 00:18:24.878 "trtype": "TCP", 00:18:24.878 "adrfam": "IPv4", 00:18:24.878 "traddr": "10.0.0.2", 00:18:24.878 "trsvcid": "4420" 00:18:24.878 }, 00:18:24.878 "peer_address": { 00:18:24.878 "trtype": "TCP", 00:18:24.878 "adrfam": "IPv4", 00:18:24.878 "traddr": "10.0.0.1", 00:18:24.878 "trsvcid": "44140" 00:18:24.878 }, 00:18:24.878 "auth": { 00:18:24.878 "state": "completed", 00:18:24.878 "digest": "sha512", 00:18:24.878 "dhgroup": "ffdhe8192" 00:18:24.878 } 00:18:24.878 } 00:18:24.878 ]' 00:18:24.878 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.878 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.878 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.878 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.878 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.878 12:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.878 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.878 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.145 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:18:25.145 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: --dhchap-ctrl-secret DHHC-1:02:MjZlMmNkOGU1NTZjOWRiZmY4NTYyMjc5YjdjOTUyZGE0NTUwNjIzMmE1NDliZjE1CQwQOA==: 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.093 12:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.093 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.665 00:18:26.665 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.665 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.665 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.665 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.665 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.665 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.665 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.665 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.665 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.665 { 00:18:26.665 "cntlid": 141, 00:18:26.665 "qid": 0, 00:18:26.665 "state": "enabled", 00:18:26.665 "thread": "nvmf_tgt_poll_group_000", 00:18:26.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.665 "listen_address": { 00:18:26.665 "trtype": "TCP", 00:18:26.665 "adrfam": "IPv4", 00:18:26.665 "traddr": "10.0.0.2", 00:18:26.665 "trsvcid": "4420" 00:18:26.665 }, 00:18:26.665 "peer_address": { 00:18:26.665 "trtype": "TCP", 00:18:26.665 "adrfam": "IPv4", 00:18:26.665 "traddr": "10.0.0.1", 00:18:26.665 "trsvcid": "44168" 00:18:26.665 }, 00:18:26.665 "auth": { 00:18:26.665 "state": "completed", 00:18:26.665 "digest": "sha512", 00:18:26.665 "dhgroup": "ffdhe8192" 00:18:26.665 } 00:18:26.665 } 00:18:26.665 ]' 00:18:26.665 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.926 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.926 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.926 12:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.926 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.926 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.926 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.926 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.187 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:18:27.187 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:01:NTk4YWJlMTQ5ZGVhNGE4YzNlMzhjYWYxOTFlOTkwMDHy003/: 00:18:27.757 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.758 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.758 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.758 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.758 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.758 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.758 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.758 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:28.018 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:28.018 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.018 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.018 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:28.018 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:28.018 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.018 12:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:28.018 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.018 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.018 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.018 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:28.018 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:28.018 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:28.589 00:18:28.589 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.590 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.590 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.851 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.851 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.851 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.851 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.851 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.851 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.851 { 00:18:28.851 "cntlid": 143, 00:18:28.851 "qid": 0, 00:18:28.851 "state": "enabled", 00:18:28.851 "thread": "nvmf_tgt_poll_group_000", 00:18:28.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:28.851 "listen_address": { 00:18:28.851 "trtype": "TCP", 00:18:28.851 "adrfam": "IPv4", 00:18:28.851 "traddr": "10.0.0.2", 00:18:28.851 "trsvcid": "4420" 00:18:28.851 }, 00:18:28.851 "peer_address": { 00:18:28.851 "trtype": "TCP", 00:18:28.851 "adrfam": "IPv4", 00:18:28.851 "traddr": "10.0.0.1", 00:18:28.851 "trsvcid": "44194" 00:18:28.851 }, 00:18:28.851 "auth": { 00:18:28.851 "state": "completed", 00:18:28.851 "digest": "sha512", 00:18:28.851 "dhgroup": "ffdhe8192" 00:18:28.851 } 00:18:28.851 } 00:18:28.851 ]' 00:18:28.851 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.851 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.851 
12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.851 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.851 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.851 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.851 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.851 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.111 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:18:29.111 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.053 12:23:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.053 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.624 00:18:30.624 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.624 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.624 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.624 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.885 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.885 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.885 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.885 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.885 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.885 { 00:18:30.885 "cntlid": 145, 00:18:30.885 "qid": 0, 00:18:30.885 "state": "enabled", 00:18:30.885 "thread": "nvmf_tgt_poll_group_000", 00:18:30.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:30.885 "listen_address": { 00:18:30.885 "trtype": "TCP", 00:18:30.885 "adrfam": "IPv4", 00:18:30.885 "traddr": "10.0.0.2", 00:18:30.885 "trsvcid": "4420" 00:18:30.885 }, 00:18:30.885 "peer_address": { 00:18:30.885 
"trtype": "TCP", 00:18:30.885 "adrfam": "IPv4", 00:18:30.885 "traddr": "10.0.0.1", 00:18:30.885 "trsvcid": "44220" 00:18:30.885 }, 00:18:30.885 "auth": { 00:18:30.885 "state": "completed", 00:18:30.885 "digest": "sha512", 00:18:30.885 "dhgroup": "ffdhe8192" 00:18:30.885 } 00:18:30.885 } 00:18:30.885 ]' 00:18:30.885 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.885 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.885 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.885 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.885 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.885 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.885 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.885 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.146 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:18:31.146 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTYwNTdhODk3NGMzM2YzMDI2NjQ0NDgwNDM2NDQzYzE4MGUyN2MwMGVkZjliNTgw1Niqjw==: --dhchap-ctrl-secret DHHC-1:03:MTE2Y2VjODg5OGRkYjFiN2I5M2QwM2Y1ZDAzMjcyMmM1OGI4NjMzMWU3ZjUzMDcyMzJhMjEzNzY2MDE1YzMyN261h3M=: 00:18:31.718 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.718 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.718 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.718 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.718 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.718 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:31.718 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.718 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.718 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.718 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:31.718 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:31.718 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:31.718 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:31.978 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.979 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:31.979 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.979 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:31.979 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:31.979 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:32.239 request: 00:18:32.239 { 00:18:32.239 "name": "nvme0", 00:18:32.239 "trtype": "tcp", 00:18:32.239 "traddr": "10.0.0.2", 00:18:32.239 "adrfam": "ipv4", 00:18:32.239 "trsvcid": "4420", 00:18:32.239 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:32.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:32.239 "prchk_reftag": false, 00:18:32.239 "prchk_guard": false, 00:18:32.239 "hdgst": false, 00:18:32.239 "ddgst": false, 00:18:32.239 "dhchap_key": "key2", 00:18:32.239 "allow_unrecognized_csi": false, 00:18:32.239 "method": "bdev_nvme_attach_controller", 00:18:32.239 "req_id": 1 00:18:32.239 } 00:18:32.239 Got JSON-RPC error response 00:18:32.239 response: 00:18:32.239 { 00:18:32.239 "code": -5, 00:18:32.239 "message": "Input/output error" 00:18:32.239 } 00:18:32.239 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:32.239 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:32.239 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:32.239 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:32.239 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.239 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.239 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.239 12:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.239 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.239 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.239 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.239 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.239 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:32.239 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:32.239 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:32.239 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:32.239 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.239 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:32.501 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.501 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:32.501 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:32.501 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:32.763 request: 00:18:32.763 { 00:18:32.763 "name": "nvme0", 00:18:32.763 "trtype": "tcp", 00:18:32.763 "traddr": "10.0.0.2", 00:18:32.763 "adrfam": "ipv4", 00:18:32.763 "trsvcid": "4420", 00:18:32.763 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:32.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:32.763 "prchk_reftag": false, 00:18:32.763 "prchk_guard": false, 00:18:32.763 "hdgst": false, 00:18:32.763 "ddgst": false, 00:18:32.763 "dhchap_key": "key1", 00:18:32.763 "dhchap_ctrlr_key": "ckey2", 00:18:32.763 "allow_unrecognized_csi": false, 00:18:32.763 "method": "bdev_nvme_attach_controller", 00:18:32.763 "req_id": 1 00:18:32.763 } 00:18:32.763 Got JSON-RPC error response 00:18:32.763 response: 00:18:32.763 { 00:18:32.763 "code": -5, 00:18:32.763 "message": "Input/output error" 00:18:32.763 } 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:32.763 12:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.763 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.335 request: 00:18:33.335 { 00:18:33.335 "name": "nvme0", 00:18:33.335 "trtype": "tcp", 00:18:33.335 "traddr": "10.0.0.2", 00:18:33.335 "adrfam": "ipv4", 00:18:33.335 "trsvcid": "4420", 00:18:33.335 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:33.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:33.335 "prchk_reftag": false, 00:18:33.335 "prchk_guard": false, 00:18:33.335 "hdgst": false, 00:18:33.335 "ddgst": false, 00:18:33.335 "dhchap_key": "key1", 00:18:33.335 "dhchap_ctrlr_key": "ckey1", 00:18:33.335 "allow_unrecognized_csi": false, 00:18:33.335 "method": "bdev_nvme_attach_controller", 00:18:33.335 "req_id": 1 00:18:33.335 } 00:18:33.335 Got JSON-RPC error response 00:18:33.335 response: 00:18:33.335 { 00:18:33.335 "code": -5, 00:18:33.335 "message": "Input/output error" 00:18:33.335 } 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1615678 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1615678 ']' 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1615678 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1615678 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1615678' 00:18:33.335 killing process with pid 1615678 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1615678 00:18:33.335 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1615678 00:18:33.595 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:33.595 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:33.595 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:33.595 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:33.595 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1643513 00:18:33.595 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1643513 00:18:33.595 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:33.595 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1643513 ']' 00:18:33.595 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.595 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:33.595 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.595 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:33.595 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.535 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:34.535 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:34.535 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:34.535 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:34.535 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.535 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.535 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:34.535 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1643513 00:18:34.535 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1643513 ']' 00:18:34.535 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.535 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:34.535 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
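
Before the restarted target can authenticate any host, target/auth.sh has to re-register the DHCHAP secrets with the fresh nvmf_tgt; the keyring_file_add_key records traced below are that loop. What follows here is a minimal illustrative sketch (not part of the captured run), assuming the default /var/tmp/spdk.sock target socket; the rpc.py path and the /tmp/spdk.key-* file names are copied from this trace, while the loop body is a simplified stand-in for the script's own "for i in ${!keys[@]}" logic rather than a verbatim excerpt:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# key<i> authenticates the host; ckey<i>, when set, is the controller
# (bidirectional) key. ckey3 is deliberately left unset in this run.
keys=(/tmp/spdk.key-null.CDS /tmp/spdk.key-sha256.gUT /tmp/spdk.key-sha384.5Kj /tmp/spdk.key-sha512.qCf)
ckeys=(/tmp/spdk.key-sha512.0kV /tmp/spdk.key-sha384.RTx /tmp/spdk.key-sha256.L1m "")
for i in "${!keys[@]}"; do
    # register the host key under the name keyN that later RPCs refer to
    "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"
    # register the matching controller key only if one exists for this slot
    [[ -n ${ckeys[$i]} ]] && "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
done
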
00:18:34.535 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:34.535 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.535 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:34.535 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:34.535 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:34.535 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.535 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.535 null0 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CDS 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.0kV ]] 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0kV 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.gUT 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.RTx ]] 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RTx 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:34.795 12:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5Kj 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.L1m ]] 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L1m 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qCf 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
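
The hostrpc call being expanded here is the host half of connect_authenticate; the records that follow show the target half, where nvmf_subsystem_get_qpairs plus jq confirm that the qpair really negotiated sha512/ffdhe8192. Below is a condensed sketch of that round trip (again illustrative, not captured output), assuming the target listens on the default /var/tmp/spdk.sock; the host socket, address, NQNs, and jq filters are copied verbatim from this trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0

# host side: attach a controller over TCP, authenticating with keyring entry key3
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3

# target side: inspect the qpair the attach just created
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]  # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]  # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # authentication finished
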
00:18:34.795 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:35.735 nvme0n1 00:18:35.735 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.735 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.735 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.735 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.735 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.735 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.735 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.735 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.735 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.735 { 00:18:35.735 "cntlid": 1, 00:18:35.735 "qid": 0, 00:18:35.735 "state": "enabled", 00:18:35.735 "thread": "nvmf_tgt_poll_group_000", 00:18:35.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:35.735 "listen_address": { 00:18:35.735 "trtype": "TCP", 00:18:35.735 "adrfam": "IPv4", 00:18:35.735 "traddr": "10.0.0.2", 00:18:35.735 "trsvcid": "4420" 00:18:35.735 }, 00:18:35.735 "peer_address": { 00:18:35.735 "trtype": "TCP", 00:18:35.735 "adrfam": "IPv4", 00:18:35.735 "traddr": "10.0.0.1", 00:18:35.735 "trsvcid": "58454" 00:18:35.735 }, 00:18:35.735 "auth": { 00:18:35.735 "state": "completed", 00:18:35.735 "digest": "sha512", 00:18:35.735 "dhgroup": "ffdhe8192" 00:18:35.735 } 00:18:35.735 } 00:18:35.735 ]' 00:18:35.735 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.995 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.995 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.995 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:35.995 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.995 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.995 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.995 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.256 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:18:36.256 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:18:36.826 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.826 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.826 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.826 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.826 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.826 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:36.826 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.826 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.826 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.826 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:36.826 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:37.086 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:37.087 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:37.087 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:37.087 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:37.087 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.087 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:37.087 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.087 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:37.087 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:37.087 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:37.347 request: 00:18:37.347 { 00:18:37.347 "name": "nvme0", 00:18:37.347 "trtype": "tcp", 00:18:37.347 "traddr": "10.0.0.2", 00:18:37.347 "adrfam": "ipv4", 00:18:37.347 "trsvcid": "4420", 00:18:37.347 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:37.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:37.347 "prchk_reftag": false, 00:18:37.347 "prchk_guard": false, 00:18:37.347 "hdgst": false, 00:18:37.347 "ddgst": false, 00:18:37.347 "dhchap_key": "key3", 00:18:37.347 "allow_unrecognized_csi": false, 00:18:37.347 "method": "bdev_nvme_attach_controller", 00:18:37.347 "req_id": 1 00:18:37.347 } 00:18:37.347 Got JSON-RPC error response 00:18:37.347 response: 00:18:37.347 { 00:18:37.347 "code": -5, 00:18:37.347 "message": "Input/output error" 00:18:37.347 } 00:18:37.347 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:37.347 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:37.347 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:37.347 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:37.347 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:37.347 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:37.347 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:37.347 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:37.347 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:37.347 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:37.347 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:37.347 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:37.347 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.347 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:37.347 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.348 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:37.348 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:37.348 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:37.608 request: 00:18:37.608 { 00:18:37.608 "name": "nvme0", 00:18:37.608 "trtype": "tcp", 00:18:37.608 "traddr": "10.0.0.2", 00:18:37.608 "adrfam": "ipv4", 00:18:37.608 "trsvcid": "4420", 00:18:37.608 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:37.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:37.608 "prchk_reftag": false, 00:18:37.608 "prchk_guard": false, 00:18:37.608 "hdgst": false, 00:18:37.608 "ddgst": false, 00:18:37.608 "dhchap_key": "key3", 00:18:37.608 "allow_unrecognized_csi": false, 00:18:37.608 "method": "bdev_nvme_attach_controller", 00:18:37.608 "req_id": 1 00:18:37.608 } 00:18:37.608 Got JSON-RPC error response 00:18:37.608 response: 00:18:37.608 { 00:18:37.608 "code": -5, 00:18:37.608 "message": "Input/output error" 00:18:37.608 } 00:18:37.608 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:37.608 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:37.608 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:37.608 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:37.608 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:37.608 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:37.608 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:37.608 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:37.608 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:37.608 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:37.869 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:38.129 request: 00:18:38.129 { 00:18:38.129 "name": "nvme0", 00:18:38.129 "trtype": "tcp", 00:18:38.129 "traddr": "10.0.0.2", 00:18:38.129 "adrfam": "ipv4", 00:18:38.129 "trsvcid": "4420", 00:18:38.129 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:38.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:38.129 "prchk_reftag": false, 00:18:38.129 "prchk_guard": false, 00:18:38.129 "hdgst": false, 00:18:38.129 "ddgst": false, 00:18:38.129 "dhchap_key": "key0", 00:18:38.129 "dhchap_ctrlr_key": "key1", 00:18:38.129 "allow_unrecognized_csi": false, 00:18:38.129 "method": "bdev_nvme_attach_controller", 00:18:38.129 "req_id": 1 00:18:38.129 } 00:18:38.129 Got JSON-RPC error response 00:18:38.129 response: 00:18:38.129 { 00:18:38.129 "code": -5, 00:18:38.129 "message": "Input/output error" 00:18:38.129 } 00:18:38.129 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:38.129 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:38.129 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:38.129 12:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:38.129 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:38.130 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:38.130 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:38.390 nvme0n1 00:18:38.390 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:38.390 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:38.390 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.651 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.651 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.651 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.651 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:38.651 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.651 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.651 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.651 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:38.651 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:38.651 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:39.591 nvme0n1 00:18:39.591 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:39.591 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:39.591 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.851 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.851 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:39.851 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.851 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.851 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.851 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:39.851 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:39.851 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.111 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.111 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:18:40.111 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: --dhchap-ctrl-secret DHHC-1:03:NzZkZGYwMmQzZmI0ZWJiYjRiMjlhMGI2N2Y1OWYwNmUyZGMzZTM3NjdkMmY5OTY2ZjQzYjdjZGQ4MDY3YTkwYRVnsh8=: 00:18:40.684 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:40.684 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:40.684 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:40.684 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:40.684 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:40.684 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:40.684 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:40.684 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.684 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.946 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:40.946 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:40.946 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:40.946 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:40.946 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.946 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:40.946 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.946 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:40.946 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:40.946 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:41.518 request: 00:18:41.518 { 00:18:41.518 "name": "nvme0", 00:18:41.518 "trtype": "tcp", 00:18:41.518 "traddr": "10.0.0.2", 00:18:41.518 "adrfam": "ipv4", 00:18:41.518 "trsvcid": "4420", 00:18:41.518 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:41.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:41.518 "prchk_reftag": false, 00:18:41.518 "prchk_guard": false, 00:18:41.518 "hdgst": false, 00:18:41.518 "ddgst": false, 00:18:41.518 "dhchap_key": "key1", 00:18:41.518 "allow_unrecognized_csi": false, 00:18:41.518 "method": "bdev_nvme_attach_controller", 00:18:41.518 "req_id": 1 00:18:41.518 } 00:18:41.518 Got JSON-RPC error response 00:18:41.518 response: 00:18:41.518 { 00:18:41.518 "code": -5, 00:18:41.518 "message": "Input/output error" 00:18:41.518 } 00:18:41.518 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:41.518 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:41.518 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:41.518 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:41.518 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:41.518 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:41.518 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:42.459 nvme0n1 00:18:42.459 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:42.459 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:42.459 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.459 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.459 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.459 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.720 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.720 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.720 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.720 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.720 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:42.720 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:42.720 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:42.720 nvme0n1 00:18:42.980 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:42.980 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:42.980 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.980 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.980 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.980 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.241 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:43.241 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.241 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.241 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.241 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: '' 2s 00:18:43.241 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:43.241 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:43.241 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: 00:18:43.241 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:43.241 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:43.241 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:43.241 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: ]] 00:18:43.241 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MThmYzExZWY1Y2FiMTZiNGUxZjEyOGMxODYyMGNiODTXt3rn: 00:18:43.241 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:43.241 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:43.241 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: 2s 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: ]] 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2I0OGNhZTUwNmFmYWQxYmY2ZTUwYWJmYzhjMzk3YTlkNGIxNjQyMTgyOTAwNzg06S6utQ==: 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:45.162 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:47.709 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:47.709 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:47.709 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:47.709 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:47.709 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:47.709 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:47.709 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:47.709 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.709 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:47.709 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.709 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.709 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.709 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:47.709 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:47.709 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:48.279 nvme0n1 00:18:48.279 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:48.279 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.279 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.279 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.279 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:48.279 12:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:48.850 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:48.850 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:48.850 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.850 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.850 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.850 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.850 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.850 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.850 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:48.850 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:49.111 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:49.111 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.111 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name' 00:18:49.371 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.371 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:49.371 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.371 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.371 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.371 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:49.371 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:49.371 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:49.371 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:49.371 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.371 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:49.371 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.371 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:49.372 12:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:49.942 request: 00:18:49.942 { 00:18:49.942 "name": "nvme0", 00:18:49.942 "dhchap_key": "key1", 00:18:49.942 "dhchap_ctrlr_key": "key3", 00:18:49.942 "method": "bdev_nvme_set_keys", 00:18:49.942 "req_id": 1 00:18:49.942 } 00:18:49.942 Got JSON-RPC error response 00:18:49.942 response: 00:18:49.942 { 00:18:49.942 "code": -13, 00:18:49.942 "message": "Permission denied" 00:18:49.942 } 00:18:49.942 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:49.942 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:49.942 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:49.942 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:49.942 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:49.942 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:49.942 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.942 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:49.942 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:50.891 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:50.891 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.891 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:51.152 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:51.152 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:51.152 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.152 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.152 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.152 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:51.152 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:51.152 12:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:52.093 nvme0n1 00:18:52.093 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:52.093 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.093 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.093 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.093 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:52.093 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:52.094 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:52.094 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
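[editor's note] The exchange above, together with the request that completes just below, is the DH-HMAC-CHAP re-key test: nvmf_subsystem_set_keys installs a new key pair on the target, bdev_nvme_set_keys re-authenticates the live host controller, and a deliberately mismatched pair must come back as JSON-RPC error -13 ("Permission denied"). A minimal sketch of that sequence, assuming the same rpc.py path, sockets, and NQNs used in this run; this is a condensation of the traced calls, not the harness's own helper:

  # Hypothetical condensed re-key flow mirroring the rpc.py calls traced above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock     # host-side app socket, as in this log
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # 1. Target side: accept a new key pair for this host (default target socket).
  $RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3
  # 2. Host side: re-authenticate the existing controller with the matching pair.
  $RPC -s "$HOST_SOCK" bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
  # 3. Negative case: a pair the target was not given must fail; rpc.py exits
  #    non-zero and the response carries code -13 ("Permission denied").
  if $RPC -s "$HOST_SOCK" bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0; then
      echo "unexpected success"; exit 1
  fi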
00:18:52.094 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:52.094 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:52.094 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:52.094 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:52.094 12:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:52.665 request: 00:18:52.665 { 00:18:52.665 "name": "nvme0", 00:18:52.665 "dhchap_key": "key2", 00:18:52.665 "dhchap_ctrlr_key": "key0", 00:18:52.665 "method": "bdev_nvme_set_keys", 00:18:52.665 "req_id": 1 00:18:52.665 } 00:18:52.665 Got JSON-RPC error response 00:18:52.665 response: 00:18:52.665 { 00:18:52.665 "code": -13, 00:18:52.665 "message": "Permission denied" 00:18:52.665 } 00:18:52.665 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:52.665 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:52.665 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:52.665 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:52.665 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:52.665 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:52.665 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.665 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:52.665 12:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1615900 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1615900 ']' 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1615900 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:54.049 
12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1615900 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1615900' 00:18:54.049 killing process with pid 1615900 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1615900 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1615900 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:54.049 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:54.310 rmmod nvme_tcp 00:18:54.310 rmmod nvme_fabrics 00:18:54.310 rmmod nvme_keyring 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 1643513 ']' 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 1643513 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1643513 ']' 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1643513 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1643513 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1643513' 00:18:54.310 killing process with pid 1643513 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1643513 00:18:54.310 12:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1643513 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.310 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.861 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:56.861 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.CDS /tmp/spdk.key-sha256.gUT /tmp/spdk.key-sha384.5Kj /tmp/spdk.key-sha512.qCf /tmp/spdk.key-sha512.0kV /tmp/spdk.key-sha384.RTx /tmp/spdk.key-sha256.L1m '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:56.861 00:18:56.861 real 2m44.586s 00:18:56.861 user 6m6.918s 00:18:56.861 sys 0m24.089s 00:18:56.861 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:56.861 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.861 ************************************ 00:18:56.861 END TEST nvmf_auth_target 00:18:56.861 ************************************ 00:18:56.861 12:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:56.861 12:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:56.861 12:23:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:56.861 12:23:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:56.861 12:23:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:56.861 ************************************ 00:18:56.861 START TEST nvmf_bdevio_no_huge 00:18:56.861 ************************************ 00:18:56.861 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:56.862 * Looking for test storage... 
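[editor's note] The trace that follows is scripts/common.sh comparing the installed lcov version against 2 (lt 1.15 2) to decide which --rc option spelling to export. Stripped of xtrace noise, the helper splits each version string on '.', '-' and ':' and compares field by field. A hypothetical restatement of that logic, assuming the field-wise semantics visible in the trace below (the real helper, cmp_versions, supports more operators than less-than):

  # Hypothetical restatement of the version-comparison walk traced below.
  cmp_lt() {                       # returns 0 when version $1 < version $2
    local IFS=.-: i
    local -a v1=($1) v2=($2)       # unquoted expansion splits on IFS
    for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
      ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
      ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
    done
    return 1                       # equal is not less-than
  }
  cmp_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* option spelling"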
00:18:56.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:56.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.862 --rc genhtml_branch_coverage=1 00:18:56.862 --rc genhtml_function_coverage=1 00:18:56.862 --rc genhtml_legend=1 00:18:56.862 --rc geninfo_all_blocks=1 00:18:56.862 --rc geninfo_unexecuted_blocks=1 00:18:56.862 00:18:56.862 ' 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:56.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.862 --rc genhtml_branch_coverage=1 00:18:56.862 --rc genhtml_function_coverage=1 00:18:56.862 --rc genhtml_legend=1 00:18:56.862 --rc geninfo_all_blocks=1 00:18:56.862 --rc geninfo_unexecuted_blocks=1 00:18:56.862 00:18:56.862 ' 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:56.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.862 --rc genhtml_branch_coverage=1 00:18:56.862 --rc genhtml_function_coverage=1 00:18:56.862 --rc genhtml_legend=1 00:18:56.862 --rc geninfo_all_blocks=1 00:18:56.862 --rc geninfo_unexecuted_blocks=1 00:18:56.862 00:18:56.862 ' 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:56.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.862 --rc genhtml_branch_coverage=1 00:18:56.862 --rc genhtml_function_coverage=1 00:18:56.862 --rc genhtml_legend=1 00:18:56.862 --rc geninfo_all_blocks=1 00:18:56.862 --rc geninfo_unexecuted_blocks=1 00:18:56.862 00:18:56.862 ' 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:56.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:56.862 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:56.863 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:56.863 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:56.863 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:56.863 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:56.863 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:56.863 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.863 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:56.863 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:56.863 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:56.863 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.863 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.863 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.863 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:56.863 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:56.863 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:56.863 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.079 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:05.080 
12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:05.080 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:05.080 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:05.080 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:05.080 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:05.080 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:05.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:05.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:19:05.081 00:19:05.081 --- 10.0.0.2 ping statistics --- 00:19:05.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.081 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:05.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:05.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:19:05.081 00:19:05.081 --- 10.0.0.1 ping statistics --- 00:19:05.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.081 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=1651993 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 1651993 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1651993 ']' 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.081 [2024-11-04 12:23:38.546691] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:19:05.081 [2024-11-04 12:23:38.546750] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:05.081 [2024-11-04 12:23:38.635678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:05.081 [2024-11-04 12:23:38.687424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.081 [2024-11-04 12:23:38.687454] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.081 [2024-11-04 12:23:38.687465] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.081 [2024-11-04 12:23:38.687472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:05.081 [2024-11-04 12:23:38.687477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
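Note: the trace above shows nvmf/common.sh building its two-sided test rig on a single host: the target-facing port (cvl_0_0) is moved into a private network namespace, both sides get addresses on 10.0.0.0/24, an iptables rule admits NVMe/TCP traffic on port 4420, and connectivity is verified with ping before the target is launched inside the namespace. Condensed from the commands in the trace (interface names and addresses are specific to this test bed):

  ip netns add cvl_0_0_ns_spdk                         # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

With the rig up, the no-hugepages case starts the target in that namespace with constrained memory, exactly as logged: ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78.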
00:19:05.081 [2024-11-04 12:23:38.688648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:05.081 [2024-11-04 12:23:38.688794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:05.081 [2024-11-04 12:23:38.688945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:05.081 [2024-11-04 12:23:38.688945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.081 [2024-11-04 12:23:38.836782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.081 Malloc0 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.081 [2024-11-04 12:23:38.889806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:05.081 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:05.081 { 00:19:05.081 "params": { 00:19:05.081 "name": "Nvme$subsystem", 00:19:05.081 "trtype": "$TEST_TRANSPORT", 00:19:05.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.081 "adrfam": "ipv4", 00:19:05.081 "trsvcid": "$NVMF_PORT", 00:19:05.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.081 "hdgst": ${hdgst:-false}, 00:19:05.081 "ddgst": ${ddgst:-false} 00:19:05.081 }, 00:19:05.082 "method": "bdev_nvme_attach_controller" 00:19:05.082 } 00:19:05.082 EOF 00:19:05.082 )") 00:19:05.082 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:19:05.082 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:19:05.082 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:19:05.082 12:23:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:19:05.082 "params": { 00:19:05.082 "name": "Nvme1", 00:19:05.082 "trtype": "tcp", 00:19:05.082 "traddr": "10.0.0.2", 00:19:05.082 "adrfam": "ipv4", 00:19:05.082 "trsvcid": "4420", 00:19:05.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.082 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.082 "hdgst": false, 00:19:05.082 "ddgst": false 00:19:05.082 }, 00:19:05.082 "method": "bdev_nvme_attach_controller" 00:19:05.082 }' 00:19:05.082 [2024-11-04 12:23:38.946759] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
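Note: at this point target/bdevio.sh has provisioned the target over JSON-RPC and pointed the bdevio harness at it through a generated bdev_nvme config (gen_nvmf_target_json, fed to bdevio via --json /dev/fd/62, as seen above). The RPC sequence, condensed from the trace (rpc.py stands for the workspace scripts/rpc.py):

  rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
  rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420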
00:19:05.082 [2024-11-04 12:23:38.946826] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1652023 ] 00:19:05.082 [2024-11-04 12:23:39.016857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:05.082 [2024-11-04 12:23:39.072337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.082 [2024-11-04 12:23:39.072456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.082 [2024-11-04 12:23:39.072459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.082 I/O targets: 00:19:05.082 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:05.082 00:19:05.082 00:19:05.082 CUnit - A unit testing framework for C - Version 2.1-3 00:19:05.082 http://cunit.sourceforge.net/ 00:19:05.082 00:19:05.082 00:19:05.082 Suite: bdevio tests on: Nvme1n1 00:19:05.082 Test: blockdev write read block ...passed 00:19:05.082 Test: blockdev write zeroes read block ...passed 00:19:05.082 Test: blockdev write zeroes read no split ...passed 00:19:05.082 Test: blockdev write zeroes read split ...passed 00:19:05.082 Test: blockdev write zeroes read split partial ...passed 00:19:05.082 Test: blockdev reset ...[2024-11-04 12:23:39.459933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:05.082 [2024-11-04 12:23:39.459992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af18c0 (9): Bad file descriptor 00:19:05.082 [2024-11-04 12:23:39.479413] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:05.082 passed 00:19:05.082 Test: blockdev write read 8 blocks ...passed 00:19:05.082 Test: blockdev write read size > 128k ...passed 00:19:05.082 Test: blockdev write read invalid size ...passed 00:19:05.082 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:05.082 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:05.082 Test: blockdev write read max offset ...passed 00:19:05.342 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:05.342 Test: blockdev writev readv 8 blocks ...passed 00:19:05.342 Test: blockdev writev readv 30 x 1block ...passed 00:19:05.342 Test: blockdev writev readv block ...passed 00:19:05.342 Test: blockdev writev readv size > 128k ...passed 00:19:05.342 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:05.342 Test: blockdev comparev and writev ...[2024-11-04 12:23:39.786039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.342 [2024-11-04 12:23:39.786065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:05.342 [2024-11-04 12:23:39.786076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.342 [2024-11-04 12:23:39.786082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:05.342 [2024-11-04 12:23:39.786512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.342 [2024-11-04 12:23:39.786521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:05.342 [2024-11-04 12:23:39.786530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.342 [2024-11-04 12:23:39.786536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:05.342 [2024-11-04 12:23:39.787026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.342 [2024-11-04 12:23:39.787036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:05.342 [2024-11-04 12:23:39.787046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.342 [2024-11-04 12:23:39.787051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:05.342 [2024-11-04 12:23:39.787550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.342 [2024-11-04 12:23:39.787560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:05.342 [2024-11-04 12:23:39.787569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:05.342 [2024-11-04 12:23:39.787576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:05.342 passed 00:19:05.342 Test: blockdev nvme passthru rw ...passed 00:19:05.342 Test: blockdev nvme passthru vendor specific ...[2024-11-04 12:23:39.872591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.342 [2024-11-04 12:23:39.872603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:05.342 [2024-11-04 12:23:39.872916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.342 [2024-11-04 12:23:39.872925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:05.342 [2024-11-04 12:23:39.873268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.342 [2024-11-04 12:23:39.873276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:05.342 [2024-11-04 12:23:39.873604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:05.342 [2024-11-04 12:23:39.873613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:05.342 passed 00:19:05.342 Test: blockdev nvme admin passthru ...passed 00:19:05.602 Test: blockdev copy ...passed 00:19:05.602 00:19:05.602 Run Summary: Type Total Ran Passed Failed Inactive 00:19:05.602 suites 1 1 n/a 0 0 00:19:05.602 tests 23 23 23 0 0 00:19:05.602 asserts 152 152 152 0 n/a 00:19:05.602 00:19:05.602 Elapsed time = 1.316 seconds 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:05.864 rmmod nvme_tcp 00:19:05.864 rmmod nvme_fabrics 00:19:05.864 rmmod nvme_keyring 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 1651993 ']' 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 1651993 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1651993 ']' 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1651993 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1651993 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1651993' 00:19:05.864 killing process with pid 1651993 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1651993 00:19:05.864 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1651993 00:19:06.438 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:06.438 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:06.438 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:06.438 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:06.438 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:06.438 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:19:06.438 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:19:06.438 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:06.438 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:06.438 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.438 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:06.438 12:23:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.349 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:08.349 00:19:08.349 real 0m11.761s 00:19:08.349 user 0m11.936s 00:19:08.349 sys 0m6.457s 00:19:08.349 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:08.349 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.349 ************************************ 00:19:08.349 END TEST nvmf_bdevio_no_huge 00:19:08.349 ************************************ 00:19:08.349 12:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:08.349 12:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:08.349 12:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:08.349 12:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:08.349 ************************************ 00:19:08.349 START TEST nvmf_tls 00:19:08.349 ************************************ 00:19:08.349 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:08.611 * Looking for test storage... 00:19:08.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:08.611 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:08.611 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:19:08.611 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:08.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.611 --rc genhtml_branch_coverage=1 00:19:08.611 --rc genhtml_function_coverage=1 00:19:08.611 --rc genhtml_legend=1 00:19:08.611 --rc geninfo_all_blocks=1 00:19:08.611 --rc geninfo_unexecuted_blocks=1 00:19:08.611 00:19:08.611 ' 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:08.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.611 --rc genhtml_branch_coverage=1 00:19:08.611 --rc genhtml_function_coverage=1 00:19:08.611 --rc genhtml_legend=1 00:19:08.611 --rc geninfo_all_blocks=1 00:19:08.611 --rc geninfo_unexecuted_blocks=1 00:19:08.611 00:19:08.611 ' 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:08.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.611 --rc genhtml_branch_coverage=1 00:19:08.611 --rc genhtml_function_coverage=1 00:19:08.611 --rc genhtml_legend=1 00:19:08.611 --rc geninfo_all_blocks=1 00:19:08.611 --rc geninfo_unexecuted_blocks=1 00:19:08.611 00:19:08.611 ' 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:08.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.611 --rc genhtml_branch_coverage=1 00:19:08.611 --rc genhtml_function_coverage=1 00:19:08.611 --rc genhtml_legend=1 00:19:08.611 --rc geninfo_all_blocks=1 00:19:08.611 --rc geninfo_unexecuted_blocks=1 00:19:08.611 00:19:08.611 ' 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
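Note: the tls.sh preamble above exercises the version helpers in scripts/common.sh to decide whether the installed lcov predates 2.x: cmp_versions splits each version string on ".", "-" and ":" (IFS=.-:) and compares the parts numerically, left to right. A simplified reconstruction of the less-than check, under the assumption of purely numeric components (the real helper also validates each part and supports other operators):

  lt() {
      local i
      local IFS=.-:
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing part decides
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                                          # equal versions are not less-than
  }
  lt 1.15 2 && echo "lcov is older than 2.x"            # mirrors the 'lt 1.15 2' call above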
00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:08.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:08.611 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:08.612 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.612 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:08.612 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:08.612 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:08.612 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.612 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:08.612 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.612 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:08.612 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:08.612 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:08.612 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.750 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:16.751 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:16.751 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:16.751 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:16.751 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:16.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:16.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:19:16.751 00:19:16.751 --- 10.0.0.2 ping statistics --- 00:19:16.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.751 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:16.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:16.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:19:16.751 00:19:16.751 --- 10.0.0.1 ping statistics --- 00:19:16.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.751 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1656642 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1656642 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1656642 ']' 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:16.751 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.751 [2024-11-04 12:23:50.501856] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
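Note: unlike the bdevio run, the TLS target is started with --wait-for-rpc, which holds the app in a pre-init state so the test can reconfigure the socket layer before any listener exists. The trace that follows shows tls.sh selecting the ssl socket implementation and probing its options; the essential sequence, condensed from the trace:

  rpc.py sock_set_default_impl -i ssl                   # use the ssl (TLS-capable) sock implementation
  rpc.py sock_impl_set_options -i ssl --tls-version 13  # tls.sh also probes version 7 and the kTLS toggle
  rpc.py framework_start_init                           # leave the --wait-for-rpc pre-init state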
00:19:16.751 [2024-11-04 12:23:50.501927] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.751 [2024-11-04 12:23:50.593371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.751 [2024-11-04 12:23:50.644027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.752 [2024-11-04 12:23:50.644078] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.752 [2024-11-04 12:23:50.644087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.752 [2024-11-04 12:23:50.644095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.752 [2024-11-04 12:23:50.644101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.752 [2024-11-04 12:23:50.644870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.752 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:16.752 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:16.752 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:16.752 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:16.752 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.013 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.013 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:17.013 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:17.013 true 00:19:17.013 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:17.013 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:17.274 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:17.274 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:17.274 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:17.536 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:17.536 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:17.797 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:17.797 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:17.797 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:17.797 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:17.797 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:18.058 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:18.058 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:18.058 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:18.058 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:18.368 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:18.368 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:18.368 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:18.368 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:18.368 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:18.632 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:18.632 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:18.632 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:19:18.897 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:19.158 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:19.158 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:19.158 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.zwWl0rMdDr 00:19:19.158 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:19.158 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.euVQcnCXlV 00:19:19.158 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:19.158 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:19.158 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.zwWl0rMdDr 00:19:19.158 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.euVQcnCXlV 00:19:19.158 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:19.158 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:19.418 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.zwWl0rMdDr 00:19:19.418 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zwWl0rMdDr 00:19:19.418 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:19.678 [2024-11-04 12:23:54.094748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.678 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:19.938 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:19.938 [2024-11-04 12:23:54.431556] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:19.938 [2024-11-04 12:23:54.431762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.938 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:20.197 malloc0 00:19:20.197 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:20.457 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zwWl0rMdDr 00:19:20.457 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:20.718 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.zwWl0rMdDr 00:19:30.711 Initializing NVMe Controllers 00:19:30.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:30.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:30.711 Initialization complete. Launching workers. 00:19:30.711 ======================================================== 00:19:30.711 Latency(us) 00:19:30.711 Device Information : IOPS MiB/s Average min max 00:19:30.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18765.61 73.30 3410.47 1138.38 4202.32 00:19:30.711 ======================================================== 00:19:30.711 Total : 18765.61 73.30 3410.47 1138.38 4202.32 00:19:30.711 00:19:30.711 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zwWl0rMdDr 00:19:30.711 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:30.711 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:30.711 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:30.711 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zwWl0rMdDr 00:19:30.711 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.711 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1659528 00:19:30.711 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.711 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1659528 /var/tmp/bdevperf.sock 00:19:30.711 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.711 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1659528 ']' 00:19:30.711 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.711 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:30.711 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
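Annotation: the format_interchange_psk / format_key helpers above assemble the TLS PSK interchange strings the rest of this run feeds to the keyring. Judging from the inline `python -` heredoc and the logged values (prefix NVMeTLSkey-1, digest 1, a trailing colon), the format is the prefix, a two-hex-digit hash identifier, then base64 of the configured key bytes with a little-endian CRC32 appended. A minimal sketch of that composition; the helper name is copied from the script, but the exact byte layout is inferred from this log rather than authoritative:

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
    """Assemble an NVMe TLS PSK interchange string: prefix, two-hex-digit
    hash id, base64(key bytes || little-endian CRC32 of key), colon-terminated."""
    key_bytes = key.encode("ascii")
    crc = zlib.crc32(key_bytes).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(key_bytes + crc).decode("ascii")
    return f"{prefix}:{digest:02x}:{b64}:"

# Fed the first key above (digest 1 -> ":01:"), this should reproduce the
# NVMeTLSkey-1:01:... string that was chmod'd to 0600 at /tmp/tmp.zwWl0rMdDr.
print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))
```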
00:19:30.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.711 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:30.711 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.972 [2024-11-04 12:24:05.288115] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:19:30.972 [2024-11-04 12:24:05.288173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1659528 ] 00:19:30.972 [2024-11-04 12:24:05.338823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.972 [2024-11-04 12:24:05.367858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.972 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:30.972 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:30.972 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zwWl0rMdDr 00:19:31.232 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:31.232 [2024-11-04 12:24:05.752538] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.494 TLSTESTn1 00:19:31.494 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:31.494 Running I/O for 10 seconds... 
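Annotation: bdevperf was started with `-z -r /var/tmp/bdevperf.sock`, so it idles until driven over its JSON-RPC socket; the script registers the PSK with keyring_file_add_key, attaches with `bdev_nvme_attach_controller --psk key0`, then kicks off I/O via bdevperf.py perform_tests. SPDK RPC is plain JSON-RPC 2.0 over a Unix-domain socket, so the attach step can be reproduced without rpc.py. A sketch, not a drop-in replacement for rpc.py; parameter names are taken from the request dumps later in this log, and key0 must have been registered first:

```python
import json
import socket

def spdk_rpc(sock_path: str, method: str, params: dict, req_id: int = 1) -> dict:
    """Send one JSON-RPC 2.0 request to an SPDK app socket, read one reply."""
    req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full reply")
            buf += chunk
            try:
                return json.loads(buf.decode())
            except json.JSONDecodeError:
                continue  # partial reply, keep reading

# Mirrors the attach the script performs against the idling bdevperf:
resp = spdk_rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
    "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "psk": "key0",
})
print(resp)
```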
00:19:33.377 6162.00 IOPS, 24.07 MiB/s [2024-11-04T11:24:09.332Z] 6194.00 IOPS, 24.20 MiB/s [2024-11-04T11:24:10.271Z] 5978.33 IOPS, 23.35 MiB/s [2024-11-04T11:24:11.213Z] 5919.50 IOPS, 23.12 MiB/s [2024-11-04T11:24:12.155Z] 5928.80 IOPS, 23.16 MiB/s [2024-11-04T11:24:13.098Z] 6023.50 IOPS, 23.53 MiB/s [2024-11-04T11:24:14.041Z] 5954.57 IOPS, 23.26 MiB/s [2024-11-04T11:24:14.983Z] 5925.88 IOPS, 23.15 MiB/s [2024-11-04T11:24:16.370Z] 5938.22 IOPS, 23.20 MiB/s [2024-11-04T11:24:16.370Z] 5941.10 IOPS, 23.21 MiB/s 00:19:41.800 Latency(us) 00:19:41.800 [2024-11-04T11:24:16.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.800 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:41.800 Verification LBA range: start 0x0 length 0x2000 00:19:41.800 TLSTESTn1 : 10.02 5943.45 23.22 0.00 0.00 21501.53 6253.23 23374.51 00:19:41.800 [2024-11-04T11:24:16.370Z] =================================================================================================================== 00:19:41.800 [2024-11-04T11:24:16.370Z] Total : 5943.45 23.22 0.00 0.00 21501.53 6253.23 23374.51 00:19:41.800 { 00:19:41.800 "results": [ 00:19:41.800 { 00:19:41.800 "job": "TLSTESTn1", 00:19:41.800 "core_mask": "0x4", 00:19:41.800 "workload": "verify", 00:19:41.800 "status": "finished", 00:19:41.801 "verify_range": { 00:19:41.801 "start": 0, 00:19:41.801 "length": 8192 00:19:41.801 }, 00:19:41.801 "queue_depth": 128, 00:19:41.801 "io_size": 4096, 00:19:41.801 "runtime": 10.017247, 00:19:41.801 "iops": 5943.449332935486, 00:19:41.801 "mibps": 23.216598956779244, 00:19:41.801 "io_failed": 0, 00:19:41.801 "io_timeout": 0, 00:19:41.801 "avg_latency_us": 21501.528103868182, 00:19:41.801 "min_latency_us": 6253.2266666666665, 00:19:41.801 "max_latency_us": 23374.506666666668 00:19:41.801 } 00:19:41.801 ], 00:19:41.801 "core_count": 1 00:19:41.801 } 00:19:41.801 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:41.801 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1659528 00:19:41.801 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1659528 ']' 00:19:41.801 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1659528 00:19:41.801 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:41.801 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:41.801 12:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1659528 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1659528' 00:19:41.801 killing process with pid 1659528 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1659528 00:19:41.801 Received shutdown signal, test time was about 10.000000 seconds 00:19:41.801 00:19:41.801 Latency(us) 00:19:41.801 [2024-11-04T11:24:16.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.801 [2024-11-04T11:24:16.371Z] 
=================================================================================================================== 00:19:41.801 [2024-11-04T11:24:16.371Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1659528 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.euVQcnCXlV 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.euVQcnCXlV 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.euVQcnCXlV 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.euVQcnCXlV 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1662152 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1662152 /var/tmp/bdevperf.sock 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1662152 ']' 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
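Annotation: from here on the script exercises the failure paths. Each `NOT run_bdevperf ...` call inverts the assertion: the attach is expected to fail, the wrapper captures `es=1`, and the `(( !es == 0 ))` check visible below turns that failure into a pass. The same inversion sketched in Python, with a hypothetical attach_with_wrong_key() standing in for the mismatched-PSK attach:

```python
def expect_failure(fn, *args, **kwargs):
    """Mirror autotest_common.sh's NOT wrapper: the step passes only if
    the wrapped call fails."""
    try:
        fn(*args, **kwargs)
    except Exception as exc:
        print(f"expected failure observed: {exc}")
        return True
    raise AssertionError("call unexpectedly succeeded")

def attach_with_wrong_key():
    # Hypothetical stand-in for bdev_nvme_attach_controller with the
    # second key; the real call returns the -5 error dumped further below.
    raise OSError("Input/output error")

expect_failure(attach_with_wrong_key)
```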
00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.801 [2024-11-04 12:24:16.217696] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:19:41.801 [2024-11-04 12:24:16.217770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662152 ] 00:19:41.801 [2024-11-04 12:24:16.268564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.801 [2024-11-04 12:24:16.297907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:41.801 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.euVQcnCXlV 00:19:42.063 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:42.324 [2024-11-04 12:24:16.690560] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.324 [2024-11-04 12:24:16.697626] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:42.324 [2024-11-04 12:24:16.697682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd49c70 (107): Transport endpoint is not connected 00:19:42.325 [2024-11-04 12:24:16.698670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd49c70 (9): Bad file descriptor 00:19:42.325 [2024-11-04 12:24:16.699671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:42.325 [2024-11-04 12:24:16.699679] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:42.325 [2024-11-04 12:24:16.699684] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:42.325 [2024-11-04 12:24:16.699692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
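Annotation: with a key that does not match the target's registered PSK, the TLS handshake presumably never completes, so the initiator only observes the socket drop (errno 107, Transport endpoint is not connected) and surfaces it as JSON-RPC code -5 in the dump that follows. The error codes in this run line up with negated Linux errno values; a lookup sketch limited to the codes actually observed here:

```python
# JSON-RPC error codes observed in this run, read as negated Linux errnos.
OBSERVED_CODES = {
    -5: "EIO: Input/output error (transport dropped mid-handshake)",
    -1: "EPERM: Operation not permitted (keyring rejected the key path)",
    -126: "ENOKEY: Required key not available (PSK was never loaded)",
}

def explain(code: int) -> str:
    return OBSERVED_CODES.get(code, f"unmapped code {code}")

print(explain(-5))  # matches the "code": -5 response below
```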
00:19:42.325 request: 00:19:42.325 { 00:19:42.325 "name": "TLSTEST", 00:19:42.325 "trtype": "tcp", 00:19:42.325 "traddr": "10.0.0.2", 00:19:42.325 "adrfam": "ipv4", 00:19:42.325 "trsvcid": "4420", 00:19:42.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:42.325 "prchk_reftag": false, 00:19:42.325 "prchk_guard": false, 00:19:42.325 "hdgst": false, 00:19:42.325 "ddgst": false, 00:19:42.325 "psk": "key0", 00:19:42.325 "allow_unrecognized_csi": false, 00:19:42.325 "method": "bdev_nvme_attach_controller", 00:19:42.325 "req_id": 1 00:19:42.325 } 00:19:42.325 Got JSON-RPC error response 00:19:42.325 response: 00:19:42.325 { 00:19:42.325 "code": -5, 00:19:42.325 "message": "Input/output error" 00:19:42.325 } 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1662152 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1662152 ']' 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1662152 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1662152 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1662152' 00:19:42.325 killing process with pid 1662152 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1662152 00:19:42.325 Received shutdown signal, test time was about 10.000000 seconds 00:19:42.325 00:19:42.325 Latency(us) 00:19:42.325 [2024-11-04T11:24:16.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.325 [2024-11-04T11:24:16.895Z] =================================================================================================================== 00:19:42.325 [2024-11-04T11:24:16.895Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1662152 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zwWl0rMdDr 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.zwWl0rMdDr 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zwWl0rMdDr 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zwWl0rMdDr 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1662343 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1662343 /var/tmp/bdevperf.sock 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1662343 ']' 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:42.325 12:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.586 [2024-11-04 12:24:16.945410] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:19:42.586 [2024-11-04 12:24:16.945470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662343 ] 00:19:42.586 [2024-11-04 12:24:16.995110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.586 [2024-11-04 12:24:17.023842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.586 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:42.586 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:42.586 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zwWl0rMdDr 00:19:42.847 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:42.847 [2024-11-04 12:24:17.400342] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.847 [2024-11-04 12:24:17.409269] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:42.847 [2024-11-04 12:24:17.409289] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:42.847 [2024-11-04 12:24:17.409308] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:42.847 [2024-11-04 12:24:17.409537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175dc70 (107): Transport endpoint is not connected 00:19:42.847 [2024-11-04 12:24:17.410532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175dc70 (9): Bad file descriptor 00:19:42.847 [2024-11-04 12:24:17.411534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:42.847 [2024-11-04 12:24:17.411542] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:42.847 [2024-11-04 12:24:17.411548] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:42.847 [2024-11-04 12:24:17.411556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
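Annotation: this variant fails earlier than the previous one. The target composes a PSK identity from the connecting host NQN and the subsystem NQN and finds no matching key, hence the two "Could not find PSK for identity" errors above before the same -5 dump below. A sketch that reproduces the identity string exactly as logged; reading "0R01" as version / retained-key flag / hash id follows the common TP 8006 description and is not confirmed by this log:

```python
def psk_identity(hostnqn: str, subnqn: str, hash_id: int = 1) -> str:
    """Compose the TLS PSK identity string that appears in the
    'Could not find PSK for identity: ...' errors above."""
    return f"NVMe0R{hash_id:02d} {hostnqn} {subnqn}"

# host2 never had a key registered against cnode1, so this lookup fails:
print(psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
```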
00:19:42.847 request: 00:19:42.847 { 00:19:42.847 "name": "TLSTEST", 00:19:42.847 "trtype": "tcp", 00:19:42.847 "traddr": "10.0.0.2", 00:19:42.847 "adrfam": "ipv4", 00:19:42.847 "trsvcid": "4420", 00:19:42.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.847 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:42.847 "prchk_reftag": false, 00:19:42.847 "prchk_guard": false, 00:19:42.847 "hdgst": false, 00:19:42.847 "ddgst": false, 00:19:42.847 "psk": "key0", 00:19:42.847 "allow_unrecognized_csi": false, 00:19:42.847 "method": "bdev_nvme_attach_controller", 00:19:42.847 "req_id": 1 00:19:42.847 } 00:19:42.847 Got JSON-RPC error response 00:19:42.847 response: 00:19:42.847 { 00:19:42.847 "code": -5, 00:19:42.847 "message": "Input/output error" 00:19:42.847 } 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1662343 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1662343 ']' 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1662343 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1662343 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1662343' 00:19:43.109 killing process with pid 1662343 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1662343 00:19:43.109 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.109 00:19:43.109 Latency(us) 00:19:43.109 [2024-11-04T11:24:17.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.109 [2024-11-04T11:24:17.679Z] =================================================================================================================== 00:19:43.109 [2024-11-04T11:24:17.679Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1662343 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zwWl0rMdDr 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.zwWl0rMdDr 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zwWl0rMdDr 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zwWl0rMdDr 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1662367 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1662367 /var/tmp/bdevperf.sock 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1662367 ']' 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:43.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:43.109 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.109 [2024-11-04 12:24:17.642523] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:19:43.109 [2024-11-04 12:24:17.642582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662367 ] 00:19:43.371 [2024-11-04 12:24:17.693431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.371 [2024-11-04 12:24:17.722299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.371 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.371 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:43.371 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zwWl0rMdDr 00:19:43.633 12:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:43.633 [2024-11-04 12:24:18.115054] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:43.633 [2024-11-04 12:24:18.121000] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:43.633 [2024-11-04 12:24:18.121019] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:43.633 [2024-11-04 12:24:18.121038] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:43.633 [2024-11-04 12:24:18.121264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1097c70 (107): Transport endpoint is not connected 00:19:43.633 [2024-11-04 12:24:18.122260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1097c70 (9): Bad file descriptor 00:19:43.633 [2024-11-04 12:24:18.123261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:43.633 [2024-11-04 12:24:18.123270] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:43.633 [2024-11-04 12:24:18.123275] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:43.633 [2024-11-04 12:24:18.123284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:43.633 request: 00:19:43.633 { 00:19:43.633 "name": "TLSTEST", 00:19:43.633 "trtype": "tcp", 00:19:43.633 "traddr": "10.0.0.2", 00:19:43.633 "adrfam": "ipv4", 00:19:43.633 "trsvcid": "4420", 00:19:43.633 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:43.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.633 "prchk_reftag": false, 00:19:43.633 "prchk_guard": false, 00:19:43.633 "hdgst": false, 00:19:43.633 "ddgst": false, 00:19:43.633 "psk": "key0", 00:19:43.633 "allow_unrecognized_csi": false, 00:19:43.633 "method": "bdev_nvme_attach_controller", 00:19:43.633 "req_id": 1 00:19:43.633 } 00:19:43.633 Got JSON-RPC error response 00:19:43.633 response: 00:19:43.633 { 00:19:43.633 "code": -5, 00:19:43.633 "message": "Input/output error" 00:19:43.633 } 00:19:43.633 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1662367 00:19:43.633 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1662367 ']' 00:19:43.633 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1662367 00:19:43.633 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:43.633 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:43.633 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1662367 00:19:43.633 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:43.633 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:43.633 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1662367' 00:19:43.633 killing process with pid 1662367 00:19:43.633 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1662367 00:19:43.633 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.633 00:19:43.633 Latency(us) 00:19:43.633 [2024-11-04T11:24:18.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.633 [2024-11-04T11:24:18.203Z] =================================================================================================================== 00:19:43.633 [2024-11-04T11:24:18.203Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:43.633 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1662367 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:43.894 
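Annotation: the next case, driven by the wrapper lines that follow, passes an empty string instead of a key path. Registration is refused up front ("Non-absolute paths are not allowed", RPC code -1 a few steps below), and the subsequent attach then fails with -126 because key0 was never created. A sketch mimicking that path check; check_key_path is a stand-in for the keyring's internal validation, not SPDK API:

```python
import os

def check_key_path(path: str) -> None:
    """Stand-in for the keyring's path validation: an empty string is not
    absolute, so registration is refused before the file is even opened."""
    if not os.path.isabs(path):
        raise PermissionError(f"Non-absolute paths are not allowed: {path!r}")

check_key_path("/tmp/tmp.zwWl0rMdDr")   # absolute: accepted
try:
    check_key_path("")                   # the '' this test passes
except PermissionError as exc:
    print(exc)                           # maps to the RPC -1 response below
```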
12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1662691 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1662691 /var/tmp/bdevperf.sock 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1662691 ']' 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:43.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:43.894 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.894 [2024-11-04 12:24:18.340530] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:19:43.894 [2024-11-04 12:24:18.340587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1662691 ] 00:19:43.894 [2024-11-04 12:24:18.390742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.894 [2024-11-04 12:24:18.418712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.155 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:44.155 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:44.155 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:44.155 [2024-11-04 12:24:18.650806] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:44.155 [2024-11-04 12:24:18.650828] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:44.155 request: 00:19:44.155 { 00:19:44.155 "name": "key0", 00:19:44.155 "path": "", 00:19:44.155 "method": "keyring_file_add_key", 00:19:44.155 "req_id": 1 00:19:44.155 } 00:19:44.155 Got JSON-RPC error response 00:19:44.155 response: 00:19:44.155 { 00:19:44.155 "code": -1, 00:19:44.155 "message": "Operation not permitted" 00:19:44.155 } 00:19:44.155 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:44.416 [2024-11-04 12:24:18.819311] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:44.416 [2024-11-04 12:24:18.819337] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:44.416 request: 00:19:44.416 { 00:19:44.416 "name": "TLSTEST", 00:19:44.416 "trtype": "tcp", 00:19:44.416 "traddr": "10.0.0.2", 00:19:44.416 "adrfam": "ipv4", 00:19:44.416 "trsvcid": "4420", 00:19:44.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:44.416 "prchk_reftag": false, 00:19:44.416 "prchk_guard": false, 00:19:44.416 "hdgst": false, 00:19:44.416 "ddgst": false, 00:19:44.416 "psk": "key0", 00:19:44.416 "allow_unrecognized_csi": false, 00:19:44.416 "method": "bdev_nvme_attach_controller", 00:19:44.416 "req_id": 1 00:19:44.416 } 00:19:44.416 Got JSON-RPC error response 00:19:44.416 response: 00:19:44.416 { 00:19:44.416 "code": -126, 00:19:44.416 "message": "Required key not available" 00:19:44.416 } 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1662691 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1662691 ']' 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1662691 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1662691 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1662691' 00:19:44.416 killing process with pid 1662691 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1662691 00:19:44.416 Received shutdown signal, test time was about 10.000000 seconds 00:19:44.416 00:19:44.416 Latency(us) 00:19:44.416 [2024-11-04T11:24:18.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.416 [2024-11-04T11:24:18.986Z] =================================================================================================================== 00:19:44.416 [2024-11-04T11:24:18.986Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1662691 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1656642 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1656642 ']' 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1656642 00:19:44.416 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:44.677 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:44.677 12:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1656642 00:19:44.677 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:44.677 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:44.677 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1656642' 00:19:44.677 killing process with pid 1656642 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1656642 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1656642 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:44.678 12:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.GkQqcE62Uk 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.GkQqcE62Uk 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1662725 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1662725 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1662725 ']' 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:44.678 12:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.939 [2024-11-04 12:24:19.260254] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:19:44.939 [2024-11-04 12:24:19.260318] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.939 [2024-11-04 12:24:19.344398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.939 [2024-11-04 12:24:19.374302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.939 [2024-11-04 12:24:19.374334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
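Annotation: the long-key variant above uses digest 2, which shows up as ":02:" in the interchange string and is commonly documented (TP 8006) as the SHA-384 pairing for the longer configured key. A verifier sketch that inverts the earlier format helper and checks the trailing CRC32; the byte layout is inferred from this run's keys, as before:

```python
import base64
import zlib

def parse_interchange_psk(interchange: str) -> bytes:
    """Split 'NVMeTLSkey-1:<hh>:<base64>:' and verify the appended CRC32
    (assumed little-endian). Returns the configured key bytes on success."""
    prefix, hash_id, b64, trailer = interchange.split(":")
    if prefix != "NVMeTLSkey-1" or trailer != "":
        raise ValueError("unexpected interchange framing")
    blob = base64.b64decode(b64)
    key, crc = blob[:-4], blob[-4:]
    if zlib.crc32(key).to_bytes(4, "little") != crc:
        raise ValueError("CRC mismatch: corrupted interchange PSK")
    return key

# Fed the key_long string written to /tmp/tmp.GkQqcE62Uk above, this should
# hand back the 48-character configured key if the layout assumption holds.
```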
00:19:44.939 [2024-11-04 12:24:19.374339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.939 [2024-11-04 12:24:19.374344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.939 [2024-11-04 12:24:19.374349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:44.939 [2024-11-04 12:24:19.374827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.511 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:45.511 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:45.511 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:45.511 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:45.511 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.772 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.772 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.GkQqcE62Uk 00:19:45.772 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.GkQqcE62Uk 00:19:45.772 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:45.772 [2024-11-04 12:24:20.246598] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.772 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:46.177 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:46.177 [2024-11-04 12:24:20.567379] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:46.177 [2024-11-04 12:24:20.567567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.177 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:46.177 malloc0 00:19:46.177 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:46.438 12:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.GkQqcE62Uk 00:19:46.700 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:46.700 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GkQqcE62Uk 00:19:46.700 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:46.700 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:46.700 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:46.700 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GkQqcE62Uk 00:19:46.700 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:46.700 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:46.700 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1663168 00:19:46.700 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:46.700 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1663168 /var/tmp/bdevperf.sock 00:19:46.700 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1663168 ']' 00:19:46.700 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.700 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:46.700 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.700 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:46.700 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.961 [2024-11-04 12:24:21.270333] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
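The setup_nvmf_tgt sequence traced across target/tls.sh@50-59 boils down to seven RPCs against the target: create the TCP transport, create a subsystem, add a TLS listener (-k), back it with a malloc bdev, register the PSK file in the keyring, and authorize the host with that key. Condensed below, with rpc.py standing in for the full scripts/rpc.py path shown in the log and $key_path for the chmod-0600 key file from above:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 "$key_path"
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0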
00:19:46.961 [2024-11-04 12:24:21.270387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663168 ] 00:19:46.961 [2024-11-04 12:24:21.321581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.961 [2024-11-04 12:24:21.350881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.961 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:46.961 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:46.961 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GkQqcE62Uk 00:19:47.222 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:47.222 [2024-11-04 12:24:21.751893] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.482 TLSTESTn1 00:19:47.482 12:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:47.482 Running I/O for 10 seconds... 00:19:49.810 5000.00 IOPS, 19.53 MiB/s [2024-11-04T11:24:24.951Z] 5393.00 IOPS, 21.07 MiB/s [2024-11-04T11:24:26.336Z] 5348.00 IOPS, 20.89 MiB/s [2024-11-04T11:24:27.278Z] 5388.00 IOPS, 21.05 MiB/s [2024-11-04T11:24:28.221Z] 5398.60 IOPS, 21.09 MiB/s [2024-11-04T11:24:29.162Z] 5476.33 IOPS, 21.39 MiB/s [2024-11-04T11:24:30.114Z] 5481.71 IOPS, 21.41 MiB/s [2024-11-04T11:24:31.169Z] 5444.62 IOPS, 21.27 MiB/s [2024-11-04T11:24:32.111Z] 5509.00 IOPS, 21.52 MiB/s [2024-11-04T11:24:32.111Z] 5522.40 IOPS, 21.57 MiB/s 00:19:57.541 Latency(us) 00:19:57.541 [2024-11-04T11:24:32.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.541 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:57.541 Verification LBA range: start 0x0 length 0x2000 00:19:57.541 TLSTESTn1 : 10.05 5507.60 21.51 0.00 0.00 23172.91 5898.24 69905.07 00:19:57.541 [2024-11-04T11:24:32.111Z] =================================================================================================================== 00:19:57.541 [2024-11-04T11:24:32.111Z] Total : 5507.60 21.51 0.00 0.00 23172.91 5898.24 69905.07 00:19:57.541 { 00:19:57.541 "results": [ 00:19:57.541 { 00:19:57.541 "job": "TLSTESTn1", 00:19:57.541 "core_mask": "0x4", 00:19:57.541 "workload": "verify", 00:19:57.541 "status": "finished", 00:19:57.541 "verify_range": { 00:19:57.541 "start": 0, 00:19:57.541 "length": 8192 00:19:57.541 }, 00:19:57.541 "queue_depth": 128, 00:19:57.541 "io_size": 4096, 00:19:57.541 "runtime": 10.049936, 00:19:57.541 "iops": 5507.597262310924, 00:19:57.541 "mibps": 21.514051805902046, 00:19:57.541 "io_failed": 0, 00:19:57.541 "io_timeout": 0, 00:19:57.541 "avg_latency_us": 23172.909160328327, 00:19:57.541 "min_latency_us": 5898.24, 00:19:57.541 "max_latency_us": 69905.06666666667 00:19:57.541 } 00:19:57.541 ], 00:19:57.541 "core_count": 1 
00:19:57.541 } 00:19:57.542 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:57.542 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1663168 00:19:57.542 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1663168 ']' 00:19:57.542 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1663168 00:19:57.542 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:57.542 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:57.542 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1663168 00:19:57.542 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:57.542 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:57.542 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1663168' 00:19:57.542 killing process with pid 1663168 00:19:57.542 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1663168 00:19:57.542 Received shutdown signal, test time was about 10.000000 seconds 00:19:57.542 00:19:57.542 Latency(us) 00:19:57.542 [2024-11-04T11:24:32.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.542 [2024-11-04T11:24:32.112Z] =================================================================================================================== 00:19:57.542 [2024-11-04T11:24:32.112Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:57.542 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1663168 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.GkQqcE62Uk 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GkQqcE62Uk 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GkQqcE62Uk 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GkQqcE62Uk 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:57.803 12:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GkQqcE62Uk 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1665427 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1665427 /var/tmp/bdevperf.sock 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1665427 ']' 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.803 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.803 [2024-11-04 12:24:32.264711] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
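Both the baseline TLSTESTn1 run above and the negative test starting here drive bdevperf the same way (target/tls.sh@25-42): launch it idle with -z, configure it over its private RPC socket, then trigger the timed run. A condensed sketch, with rpc.py and bdevperf.py standing in for the full spdk paths in the log, and the socket-wait loop as a crude stand-in for waitforlisten:

  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done   # wait for the RPC socket
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests   # -z parks bdevperf until this call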
00:19:57.803 [2024-11-04 12:24:32.264773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665427 ] 00:19:57.803 [2024-11-04 12:24:32.316092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.803 [2024-11-04 12:24:32.344092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.063 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.064 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:58.064 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GkQqcE62Uk 00:19:58.064 [2024-11-04 12:24:32.564385] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.GkQqcE62Uk': 0100666 00:19:58.064 [2024-11-04 12:24:32.564411] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:58.064 request: 00:19:58.064 { 00:19:58.064 "name": "key0", 00:19:58.064 "path": "/tmp/tmp.GkQqcE62Uk", 00:19:58.064 "method": "keyring_file_add_key", 00:19:58.064 "req_id": 1 00:19:58.064 } 00:19:58.064 Got JSON-RPC error response 00:19:58.064 response: 00:19:58.064 { 00:19:58.064 "code": -1, 00:19:58.064 "message": "Operation not permitted" 00:19:58.064 } 00:19:58.064 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:58.325 [2024-11-04 12:24:32.752934] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.325 [2024-11-04 12:24:32.752960] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:58.325 request: 00:19:58.325 { 00:19:58.325 "name": "TLSTEST", 00:19:58.325 "trtype": "tcp", 00:19:58.325 "traddr": "10.0.0.2", 00:19:58.325 "adrfam": "ipv4", 00:19:58.325 "trsvcid": "4420", 00:19:58.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.325 "prchk_reftag": false, 00:19:58.325 "prchk_guard": false, 00:19:58.325 "hdgst": false, 00:19:58.325 "ddgst": false, 00:19:58.325 "psk": "key0", 00:19:58.325 "allow_unrecognized_csi": false, 00:19:58.325 "method": "bdev_nvme_attach_controller", 00:19:58.325 "req_id": 1 00:19:58.325 } 00:19:58.325 Got JSON-RPC error response 00:19:58.325 response: 00:19:58.325 { 00:19:58.325 "code": -126, 00:19:58.325 "message": "Required key not available" 00:19:58.325 } 00:19:58.325 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1665427 00:19:58.325 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1665427 ']' 00:19:58.325 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1665427 00:19:58.325 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:58.325 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:58.325 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1665427 00:19:58.325 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:58.325 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:58.325 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1665427' 00:19:58.325 killing process with pid 1665427 00:19:58.325 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1665427 00:19:58.325 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.325 00:19:58.325 Latency(us) 00:19:58.325 [2024-11-04T11:24:32.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.325 [2024-11-04T11:24:32.895Z] =================================================================================================================== 00:19:58.325 [2024-11-04T11:24:32.895Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.325 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1665427 00:19:58.586 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:58.586 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:58.586 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:58.586 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:58.586 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:58.586 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1662725 00:19:58.586 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1662725 ']' 00:19:58.586 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1662725 00:19:58.586 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:58.586 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:58.586 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1662725 00:19:58.586 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:58.586 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:58.586 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1662725' 00:19:58.586 killing process with pid 1662725 00:19:58.586 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1662725 00:19:58.586 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1662725 00:19:58.586 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:58.586 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:58.586 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:58.586 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.586 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
nvmfpid=1665445 00:19:58.586 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1665445 00:19:58.586 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:58.586 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1665445 ']' 00:19:58.586 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.586 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:58.587 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.587 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:58.587 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.848 [2024-11-04 12:24:33.172141] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:19:58.848 [2024-11-04 12:24:33.172219] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.848 [2024-11-04 12:24:33.258277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.848 [2024-11-04 12:24:33.292326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.848 [2024-11-04 12:24:33.292361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.848 [2024-11-04 12:24:33.292367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.848 [2024-11-04 12:24:33.292372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.848 [2024-11-04 12:24:33.292376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
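The NOT run_bdevperf, valid_exec_arg, and es=1 traces in the failed attach above come from autotest_common.sh's expected-failure guard: the wrapped command must fail for the test to pass. A simplified sketch of the idea (the real helper also distinguishes signal exits via es > 128, as the (( es > 128 )) trace shows):

  NOT() {
    "$@"
    local es=$?
    (( es != 0 ))               # exit 0 only if the wrapped command failed
  }
  NOT rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"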
00:19:58.848 [2024-11-04 12:24:33.292902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.419 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:59.419 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:59.419 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:59.419 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:59.419 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.680 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.680 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.GkQqcE62Uk 00:19:59.680 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:59.680 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.GkQqcE62Uk 00:19:59.680 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:59.680 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:59.680 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:59.680 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:59.680 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.GkQqcE62Uk 00:19:59.680 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.GkQqcE62Uk 00:19:59.680 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:59.680 [2024-11-04 12:24:34.158893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.680 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:59.941 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:59.941 [2024-11-04 12:24:34.471659] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:59.941 [2024-11-04 12:24:34.471844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:59.941 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:00.202 malloc0 00:20:00.202 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:00.463 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.GkQqcE62Uk 00:20:00.463 [2024-11-04 
12:24:34.954514] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.GkQqcE62Uk': 0100666 00:20:00.463 [2024-11-04 12:24:34.954532] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:00.463 request: 00:20:00.463 { 00:20:00.463 "name": "key0", 00:20:00.463 "path": "/tmp/tmp.GkQqcE62Uk", 00:20:00.463 "method": "keyring_file_add_key", 00:20:00.463 "req_id": 1 00:20:00.463 } 00:20:00.463 Got JSON-RPC error response 00:20:00.463 response: 00:20:00.463 { 00:20:00.463 "code": -1, 00:20:00.463 "message": "Operation not permitted" 00:20:00.463 } 00:20:00.463 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:00.724 [2024-11-04 12:24:35.122948] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:00.724 [2024-11-04 12:24:35.122973] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:00.724 request: 00:20:00.724 { 00:20:00.724 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.724 "host": "nqn.2016-06.io.spdk:host1", 00:20:00.724 "psk": "key0", 00:20:00.724 "method": "nvmf_subsystem_add_host", 00:20:00.724 "req_id": 1 00:20:00.724 } 00:20:00.724 Got JSON-RPC error response 00:20:00.724 response: 00:20:00.724 { 00:20:00.725 "code": -32603, 00:20:00.725 "message": "Internal error" 00:20:00.725 } 00:20:00.725 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:00.725 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:00.725 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:00.725 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:00.725 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1665445 00:20:00.725 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1665445 ']' 00:20:00.725 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1665445 00:20:00.725 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:00.725 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:00.725 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1665445 00:20:00.725 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:00.725 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:00.725 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1665445' 00:20:00.725 killing process with pid 1665445 00:20:00.725 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1665445 00:20:00.725 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1665445 00:20:00.985 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.GkQqcE62Uk 00:20:00.985 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:00.985 12:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:00.985 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:00.985 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.985 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:00.985 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1666032 00:20:00.985 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1666032 00:20:00.985 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1666032 ']' 00:20:00.985 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.985 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:00.985 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.986 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:00.986 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.986 [2024-11-04 12:24:35.358561] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:20:00.986 [2024-11-04 12:24:35.358617] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.986 [2024-11-04 12:24:35.440046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.986 [2024-11-04 12:24:35.468482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.986 [2024-11-04 12:24:35.468509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.986 [2024-11-04 12:24:35.468516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.986 [2024-11-04 12:24:35.468521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.986 [2024-11-04 12:24:35.468525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
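The two failures above hinge on file modes: keyring.c logs "Invalid permissions ... 0100666" and returns -1 (Operation not permitted) for a key file that is, apparently, readable beyond its owner, and nvmf_subsystem_add_host then fails with -32603 (Internal error) because key0 never made it into the keyring. The chmod 0600 at target/tls.sh@182 restores the working state before the target restarts. A minimal demo of that contract, reusing the hypothetical NOT helper and $key_path from the sketches above:

  chmod 0666 "$key_path"                            # world-readable: registration must fail
  NOT rpc.py keyring_file_add_key key0 "$key_path"  # expect -1, Operation not permitted
  chmod 0600 "$key_path"                            # owner-only again
  rpc.py keyring_file_add_key key0 "$key_path"      # now succeeds, TLS attach can proceed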
00:20:00.986 [2024-11-04 12:24:35.468973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.986 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:00.986 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:00.986 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:00.986 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:00.986 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.246 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.246 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.GkQqcE62Uk 00:20:01.246 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.GkQqcE62Uk 00:20:01.246 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:01.246 [2024-11-04 12:24:35.734681] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.246 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:01.506 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:01.506 [2024-11-04 12:24:36.055462] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.506 [2024-11-04 12:24:36.055652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.506 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:01.767 malloc0 00:20:01.767 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:02.028 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.GkQqcE62Uk 00:20:02.028 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:02.289 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:02.289 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1666209 00:20:02.289 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:02.289 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1666209 /var/tmp/bdevperf.sock 00:20:02.289 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1666209 ']' 00:20:02.289 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.289 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:02.289 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:02.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:02.289 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:02.289 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.289 [2024-11-04 12:24:36.707015] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:20:02.289 [2024-11-04 12:24:36.707067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666209 ] 00:20:02.289 [2024-11-04 12:24:36.761474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.289 [2024-11-04 12:24:36.790290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.289 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:02.289 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:02.290 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GkQqcE62Uk 00:20:02.550 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:02.810 [2024-11-04 12:24:37.191401] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.810 TLSTESTn1 00:20:02.810 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:03.070 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:03.070 "subsystems": [ 00:20:03.070 { 00:20:03.070 "subsystem": "keyring", 00:20:03.070 "config": [ 00:20:03.070 { 00:20:03.070 "method": "keyring_file_add_key", 00:20:03.070 "params": { 00:20:03.070 "name": "key0", 00:20:03.070 "path": "/tmp/tmp.GkQqcE62Uk" 00:20:03.070 } 00:20:03.070 } 00:20:03.070 ] 00:20:03.070 }, 00:20:03.070 { 00:20:03.070 "subsystem": "iobuf", 00:20:03.070 "config": [ 00:20:03.070 { 00:20:03.070 "method": "iobuf_set_options", 00:20:03.070 "params": { 00:20:03.070 "small_pool_count": 8192, 00:20:03.070 "large_pool_count": 1024, 00:20:03.070 "small_bufsize": 8192, 00:20:03.070 "large_bufsize": 135168 00:20:03.070 } 00:20:03.070 } 00:20:03.070 ] 00:20:03.070 }, 00:20:03.070 { 00:20:03.070 "subsystem": "sock", 00:20:03.070 "config": [ 00:20:03.070 { 00:20:03.070 "method": "sock_set_default_impl", 00:20:03.070 "params": { 00:20:03.070 "impl_name": "posix" 00:20:03.070 } 00:20:03.070 }, 
00:20:03.070 { 00:20:03.070 "method": "sock_impl_set_options", 00:20:03.070 "params": { 00:20:03.070 "impl_name": "ssl", 00:20:03.070 "recv_buf_size": 4096, 00:20:03.070 "send_buf_size": 4096, 00:20:03.070 "enable_recv_pipe": true, 00:20:03.070 "enable_quickack": false, 00:20:03.070 "enable_placement_id": 0, 00:20:03.070 "enable_zerocopy_send_server": true, 00:20:03.070 "enable_zerocopy_send_client": false, 00:20:03.070 "zerocopy_threshold": 0, 00:20:03.070 "tls_version": 0, 00:20:03.070 "enable_ktls": false 00:20:03.070 } 00:20:03.070 }, 00:20:03.070 { 00:20:03.070 "method": "sock_impl_set_options", 00:20:03.070 "params": { 00:20:03.070 "impl_name": "posix", 00:20:03.070 "recv_buf_size": 2097152, 00:20:03.070 "send_buf_size": 2097152, 00:20:03.070 "enable_recv_pipe": true, 00:20:03.070 "enable_quickack": false, 00:20:03.070 "enable_placement_id": 0, 00:20:03.070 "enable_zerocopy_send_server": true, 00:20:03.070 "enable_zerocopy_send_client": false, 00:20:03.070 "zerocopy_threshold": 0, 00:20:03.070 "tls_version": 0, 00:20:03.070 "enable_ktls": false 00:20:03.070 } 00:20:03.070 } 00:20:03.070 ] 00:20:03.070 }, 00:20:03.070 { 00:20:03.070 "subsystem": "vmd", 00:20:03.070 "config": [] 00:20:03.070 }, 00:20:03.070 { 00:20:03.070 "subsystem": "accel", 00:20:03.070 "config": [ 00:20:03.070 { 00:20:03.070 "method": "accel_set_options", 00:20:03.070 "params": { 00:20:03.070 "small_cache_size": 128, 00:20:03.070 "large_cache_size": 16, 00:20:03.070 "task_count": 2048, 00:20:03.070 "sequence_count": 2048, 00:20:03.070 "buf_count": 2048 00:20:03.070 } 00:20:03.070 } 00:20:03.070 ] 00:20:03.070 }, 00:20:03.070 { 00:20:03.070 "subsystem": "bdev", 00:20:03.070 "config": [ 00:20:03.070 { 00:20:03.070 "method": "bdev_set_options", 00:20:03.070 "params": { 00:20:03.070 "bdev_io_pool_size": 65535, 00:20:03.070 "bdev_io_cache_size": 256, 00:20:03.070 "bdev_auto_examine": true, 00:20:03.070 "iobuf_small_cache_size": 128, 00:20:03.070 "iobuf_large_cache_size": 16 00:20:03.070 } 00:20:03.070 }, 00:20:03.070 { 00:20:03.071 "method": "bdev_raid_set_options", 00:20:03.071 "params": { 00:20:03.071 "process_window_size_kb": 1024, 00:20:03.071 "process_max_bandwidth_mb_sec": 0 00:20:03.071 } 00:20:03.071 }, 00:20:03.071 { 00:20:03.071 "method": "bdev_iscsi_set_options", 00:20:03.071 "params": { 00:20:03.071 "timeout_sec": 30 00:20:03.071 } 00:20:03.071 }, 00:20:03.071 { 00:20:03.071 "method": "bdev_nvme_set_options", 00:20:03.071 "params": { 00:20:03.071 "action_on_timeout": "none", 00:20:03.071 "timeout_us": 0, 00:20:03.071 "timeout_admin_us": 0, 00:20:03.071 "keep_alive_timeout_ms": 10000, 00:20:03.071 "arbitration_burst": 0, 00:20:03.071 "low_priority_weight": 0, 00:20:03.071 "medium_priority_weight": 0, 00:20:03.071 "high_priority_weight": 0, 00:20:03.071 "nvme_adminq_poll_period_us": 10000, 00:20:03.071 "nvme_ioq_poll_period_us": 0, 00:20:03.071 "io_queue_requests": 0, 00:20:03.071 "delay_cmd_submit": true, 00:20:03.071 "transport_retry_count": 4, 00:20:03.071 "bdev_retry_count": 3, 00:20:03.071 "transport_ack_timeout": 0, 00:20:03.071 "ctrlr_loss_timeout_sec": 0, 00:20:03.071 "reconnect_delay_sec": 0, 00:20:03.071 "fast_io_fail_timeout_sec": 0, 00:20:03.071 "disable_auto_failback": false, 00:20:03.071 "generate_uuids": false, 00:20:03.071 "transport_tos": 0, 00:20:03.071 "nvme_error_stat": false, 00:20:03.071 "rdma_srq_size": 0, 00:20:03.071 "io_path_stat": false, 00:20:03.071 "allow_accel_sequence": false, 00:20:03.071 "rdma_max_cq_size": 0, 00:20:03.071 "rdma_cm_event_timeout_ms": 0, 00:20:03.071 
"dhchap_digests": [ 00:20:03.071 "sha256", 00:20:03.071 "sha384", 00:20:03.071 "sha512" 00:20:03.071 ], 00:20:03.071 "dhchap_dhgroups": [ 00:20:03.071 "null", 00:20:03.071 "ffdhe2048", 00:20:03.071 "ffdhe3072", 00:20:03.071 "ffdhe4096", 00:20:03.071 "ffdhe6144", 00:20:03.071 "ffdhe8192" 00:20:03.071 ] 00:20:03.071 } 00:20:03.071 }, 00:20:03.071 { 00:20:03.071 "method": "bdev_nvme_set_hotplug", 00:20:03.071 "params": { 00:20:03.071 "period_us": 100000, 00:20:03.071 "enable": false 00:20:03.071 } 00:20:03.071 }, 00:20:03.071 { 00:20:03.071 "method": "bdev_malloc_create", 00:20:03.071 "params": { 00:20:03.071 "name": "malloc0", 00:20:03.071 "num_blocks": 8192, 00:20:03.071 "block_size": 4096, 00:20:03.071 "physical_block_size": 4096, 00:20:03.071 "uuid": "0554a3d7-7336-47aa-9f59-ae62b3be447d", 00:20:03.071 "optimal_io_boundary": 0, 00:20:03.071 "md_size": 0, 00:20:03.071 "dif_type": 0, 00:20:03.071 "dif_is_head_of_md": false, 00:20:03.071 "dif_pi_format": 0 00:20:03.071 } 00:20:03.071 }, 00:20:03.071 { 00:20:03.071 "method": "bdev_wait_for_examine" 00:20:03.071 } 00:20:03.071 ] 00:20:03.071 }, 00:20:03.071 { 00:20:03.071 "subsystem": "nbd", 00:20:03.071 "config": [] 00:20:03.071 }, 00:20:03.071 { 00:20:03.071 "subsystem": "scheduler", 00:20:03.071 "config": [ 00:20:03.071 { 00:20:03.071 "method": "framework_set_scheduler", 00:20:03.071 "params": { 00:20:03.071 "name": "static" 00:20:03.071 } 00:20:03.071 } 00:20:03.071 ] 00:20:03.071 }, 00:20:03.071 { 00:20:03.071 "subsystem": "nvmf", 00:20:03.071 "config": [ 00:20:03.071 { 00:20:03.071 "method": "nvmf_set_config", 00:20:03.071 "params": { 00:20:03.071 "discovery_filter": "match_any", 00:20:03.071 "admin_cmd_passthru": { 00:20:03.071 "identify_ctrlr": false 00:20:03.071 }, 00:20:03.071 "dhchap_digests": [ 00:20:03.071 "sha256", 00:20:03.071 "sha384", 00:20:03.071 "sha512" 00:20:03.071 ], 00:20:03.071 "dhchap_dhgroups": [ 00:20:03.071 "null", 00:20:03.071 "ffdhe2048", 00:20:03.071 "ffdhe3072", 00:20:03.071 "ffdhe4096", 00:20:03.071 "ffdhe6144", 00:20:03.071 "ffdhe8192" 00:20:03.071 ] 00:20:03.071 } 00:20:03.071 }, 00:20:03.071 { 00:20:03.071 "method": "nvmf_set_max_subsystems", 00:20:03.071 "params": { 00:20:03.071 "max_subsystems": 1024 00:20:03.071 } 00:20:03.071 }, 00:20:03.071 { 00:20:03.071 "method": "nvmf_set_crdt", 00:20:03.071 "params": { 00:20:03.071 "crdt1": 0, 00:20:03.071 "crdt2": 0, 00:20:03.071 "crdt3": 0 00:20:03.071 } 00:20:03.071 }, 00:20:03.071 { 00:20:03.071 "method": "nvmf_create_transport", 00:20:03.071 "params": { 00:20:03.071 "trtype": "TCP", 00:20:03.071 "max_queue_depth": 128, 00:20:03.071 "max_io_qpairs_per_ctrlr": 127, 00:20:03.071 "in_capsule_data_size": 4096, 00:20:03.071 "max_io_size": 131072, 00:20:03.071 "io_unit_size": 131072, 00:20:03.071 "max_aq_depth": 128, 00:20:03.071 "num_shared_buffers": 511, 00:20:03.071 "buf_cache_size": 4294967295, 00:20:03.071 "dif_insert_or_strip": false, 00:20:03.071 "zcopy": false, 00:20:03.071 "c2h_success": false, 00:20:03.071 "sock_priority": 0, 00:20:03.071 "abort_timeout_sec": 1, 00:20:03.071 "ack_timeout": 0, 00:20:03.071 "data_wr_pool_size": 0 00:20:03.071 } 00:20:03.071 }, 00:20:03.071 { 00:20:03.071 "method": "nvmf_create_subsystem", 00:20:03.071 "params": { 00:20:03.071 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.071 "allow_any_host": false, 00:20:03.071 "serial_number": "SPDK00000000000001", 00:20:03.071 "model_number": "SPDK bdev Controller", 00:20:03.071 "max_namespaces": 10, 00:20:03.071 "min_cntlid": 1, 00:20:03.071 "max_cntlid": 65519, 00:20:03.071 
"ana_reporting": false 00:20:03.071 } 00:20:03.071 }, 00:20:03.071 { 00:20:03.071 "method": "nvmf_subsystem_add_host", 00:20:03.071 "params": { 00:20:03.071 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.071 "host": "nqn.2016-06.io.spdk:host1", 00:20:03.071 "psk": "key0" 00:20:03.071 } 00:20:03.071 }, 00:20:03.071 { 00:20:03.071 "method": "nvmf_subsystem_add_ns", 00:20:03.071 "params": { 00:20:03.071 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.071 "namespace": { 00:20:03.071 "nsid": 1, 00:20:03.071 "bdev_name": "malloc0", 00:20:03.071 "nguid": "0554A3D7733647AA9F59AE62B3BE447D", 00:20:03.071 "uuid": "0554a3d7-7336-47aa-9f59-ae62b3be447d", 00:20:03.071 "no_auto_visible": false 00:20:03.071 } 00:20:03.071 } 00:20:03.071 }, 00:20:03.071 { 00:20:03.071 "method": "nvmf_subsystem_add_listener", 00:20:03.071 "params": { 00:20:03.071 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.071 "listen_address": { 00:20:03.071 "trtype": "TCP", 00:20:03.071 "adrfam": "IPv4", 00:20:03.071 "traddr": "10.0.0.2", 00:20:03.071 "trsvcid": "4420" 00:20:03.071 }, 00:20:03.071 "secure_channel": true 00:20:03.071 } 00:20:03.071 } 00:20:03.071 ] 00:20:03.071 } 00:20:03.071 ] 00:20:03.071 }' 00:20:03.071 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:03.332 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:03.332 "subsystems": [ 00:20:03.332 { 00:20:03.332 "subsystem": "keyring", 00:20:03.332 "config": [ 00:20:03.332 { 00:20:03.332 "method": "keyring_file_add_key", 00:20:03.332 "params": { 00:20:03.332 "name": "key0", 00:20:03.332 "path": "/tmp/tmp.GkQqcE62Uk" 00:20:03.332 } 00:20:03.332 } 00:20:03.332 ] 00:20:03.332 }, 00:20:03.332 { 00:20:03.332 "subsystem": "iobuf", 00:20:03.332 "config": [ 00:20:03.332 { 00:20:03.332 "method": "iobuf_set_options", 00:20:03.332 "params": { 00:20:03.332 "small_pool_count": 8192, 00:20:03.332 "large_pool_count": 1024, 00:20:03.332 "small_bufsize": 8192, 00:20:03.332 "large_bufsize": 135168 00:20:03.332 } 00:20:03.332 } 00:20:03.332 ] 00:20:03.332 }, 00:20:03.332 { 00:20:03.332 "subsystem": "sock", 00:20:03.332 "config": [ 00:20:03.332 { 00:20:03.332 "method": "sock_set_default_impl", 00:20:03.332 "params": { 00:20:03.332 "impl_name": "posix" 00:20:03.332 } 00:20:03.332 }, 00:20:03.332 { 00:20:03.332 "method": "sock_impl_set_options", 00:20:03.332 "params": { 00:20:03.332 "impl_name": "ssl", 00:20:03.332 "recv_buf_size": 4096, 00:20:03.332 "send_buf_size": 4096, 00:20:03.332 "enable_recv_pipe": true, 00:20:03.332 "enable_quickack": false, 00:20:03.332 "enable_placement_id": 0, 00:20:03.332 "enable_zerocopy_send_server": true, 00:20:03.332 "enable_zerocopy_send_client": false, 00:20:03.332 "zerocopy_threshold": 0, 00:20:03.332 "tls_version": 0, 00:20:03.332 "enable_ktls": false 00:20:03.332 } 00:20:03.332 }, 00:20:03.332 { 00:20:03.333 "method": "sock_impl_set_options", 00:20:03.333 "params": { 00:20:03.333 "impl_name": "posix", 00:20:03.333 "recv_buf_size": 2097152, 00:20:03.333 "send_buf_size": 2097152, 00:20:03.333 "enable_recv_pipe": true, 00:20:03.333 "enable_quickack": false, 00:20:03.333 "enable_placement_id": 0, 00:20:03.333 "enable_zerocopy_send_server": true, 00:20:03.333 "enable_zerocopy_send_client": false, 00:20:03.333 "zerocopy_threshold": 0, 00:20:03.333 "tls_version": 0, 00:20:03.333 "enable_ktls": false 00:20:03.333 } 00:20:03.333 } 00:20:03.333 ] 00:20:03.333 }, 00:20:03.333 { 00:20:03.333 
"subsystem": "vmd", 00:20:03.333 "config": [] 00:20:03.333 }, 00:20:03.333 { 00:20:03.333 "subsystem": "accel", 00:20:03.333 "config": [ 00:20:03.333 { 00:20:03.333 "method": "accel_set_options", 00:20:03.333 "params": { 00:20:03.333 "small_cache_size": 128, 00:20:03.333 "large_cache_size": 16, 00:20:03.333 "task_count": 2048, 00:20:03.333 "sequence_count": 2048, 00:20:03.333 "buf_count": 2048 00:20:03.333 } 00:20:03.333 } 00:20:03.333 ] 00:20:03.333 }, 00:20:03.333 { 00:20:03.333 "subsystem": "bdev", 00:20:03.333 "config": [ 00:20:03.333 { 00:20:03.333 "method": "bdev_set_options", 00:20:03.333 "params": { 00:20:03.333 "bdev_io_pool_size": 65535, 00:20:03.333 "bdev_io_cache_size": 256, 00:20:03.333 "bdev_auto_examine": true, 00:20:03.333 "iobuf_small_cache_size": 128, 00:20:03.333 "iobuf_large_cache_size": 16 00:20:03.333 } 00:20:03.333 }, 00:20:03.333 { 00:20:03.333 "method": "bdev_raid_set_options", 00:20:03.333 "params": { 00:20:03.333 "process_window_size_kb": 1024, 00:20:03.333 "process_max_bandwidth_mb_sec": 0 00:20:03.333 } 00:20:03.333 }, 00:20:03.333 { 00:20:03.333 "method": "bdev_iscsi_set_options", 00:20:03.333 "params": { 00:20:03.333 "timeout_sec": 30 00:20:03.333 } 00:20:03.333 }, 00:20:03.333 { 00:20:03.333 "method": "bdev_nvme_set_options", 00:20:03.333 "params": { 00:20:03.333 "action_on_timeout": "none", 00:20:03.333 "timeout_us": 0, 00:20:03.333 "timeout_admin_us": 0, 00:20:03.333 "keep_alive_timeout_ms": 10000, 00:20:03.333 "arbitration_burst": 0, 00:20:03.333 "low_priority_weight": 0, 00:20:03.333 "medium_priority_weight": 0, 00:20:03.333 "high_priority_weight": 0, 00:20:03.333 "nvme_adminq_poll_period_us": 10000, 00:20:03.333 "nvme_ioq_poll_period_us": 0, 00:20:03.333 "io_queue_requests": 512, 00:20:03.333 "delay_cmd_submit": true, 00:20:03.333 "transport_retry_count": 4, 00:20:03.333 "bdev_retry_count": 3, 00:20:03.333 "transport_ack_timeout": 0, 00:20:03.333 "ctrlr_loss_timeout_sec": 0, 00:20:03.333 "reconnect_delay_sec": 0, 00:20:03.333 "fast_io_fail_timeout_sec": 0, 00:20:03.333 "disable_auto_failback": false, 00:20:03.333 "generate_uuids": false, 00:20:03.333 "transport_tos": 0, 00:20:03.333 "nvme_error_stat": false, 00:20:03.333 "rdma_srq_size": 0, 00:20:03.333 "io_path_stat": false, 00:20:03.333 "allow_accel_sequence": false, 00:20:03.333 "rdma_max_cq_size": 0, 00:20:03.333 "rdma_cm_event_timeout_ms": 0, 00:20:03.333 "dhchap_digests": [ 00:20:03.333 "sha256", 00:20:03.333 "sha384", 00:20:03.333 "sha512" 00:20:03.333 ], 00:20:03.333 "dhchap_dhgroups": [ 00:20:03.333 "null", 00:20:03.333 "ffdhe2048", 00:20:03.333 "ffdhe3072", 00:20:03.333 "ffdhe4096", 00:20:03.333 "ffdhe6144", 00:20:03.333 "ffdhe8192" 00:20:03.333 ] 00:20:03.333 } 00:20:03.333 }, 00:20:03.333 { 00:20:03.333 "method": "bdev_nvme_attach_controller", 00:20:03.333 "params": { 00:20:03.333 "name": "TLSTEST", 00:20:03.333 "trtype": "TCP", 00:20:03.333 "adrfam": "IPv4", 00:20:03.333 "traddr": "10.0.0.2", 00:20:03.333 "trsvcid": "4420", 00:20:03.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.333 "prchk_reftag": false, 00:20:03.333 "prchk_guard": false, 00:20:03.333 "ctrlr_loss_timeout_sec": 0, 00:20:03.333 "reconnect_delay_sec": 0, 00:20:03.333 "fast_io_fail_timeout_sec": 0, 00:20:03.333 "psk": "key0", 00:20:03.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.333 "hdgst": false, 00:20:03.333 "ddgst": false, 00:20:03.333 "multipath": "multipath" 00:20:03.333 } 00:20:03.333 }, 00:20:03.333 { 00:20:03.333 "method": "bdev_nvme_set_hotplug", 00:20:03.333 "params": { 00:20:03.333 "period_us": 
100000, 00:20:03.333 "enable": false 00:20:03.333 } 00:20:03.333 }, 00:20:03.333 { 00:20:03.333 "method": "bdev_wait_for_examine" 00:20:03.333 } 00:20:03.333 ] 00:20:03.333 }, 00:20:03.333 { 00:20:03.333 "subsystem": "nbd", 00:20:03.333 "config": [] 00:20:03.333 } 00:20:03.333 ] 00:20:03.333 }' 00:20:03.333 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1666209 00:20:03.333 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1666209 ']' 00:20:03.333 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1666209 00:20:03.333 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:03.333 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.333 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1666209 00:20:03.333 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:03.333 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:03.333 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1666209' 00:20:03.333 killing process with pid 1666209 00:20:03.333 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1666209 00:20:03.333 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.333 00:20:03.333 Latency(us) 00:20:03.333 [2024-11-04T11:24:37.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.333 [2024-11-04T11:24:37.903Z] =================================================================================================================== 00:20:03.333 [2024-11-04T11:24:37.903Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:03.333 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1666209 00:20:03.594 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1666032 00:20:03.594 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1666032 ']' 00:20:03.594 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1666032 00:20:03.594 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:03.594 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.594 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1666032 00:20:03.594 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:03.594 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:03.594 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1666032' 00:20:03.595 killing process with pid 1666032 00:20:03.595 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1666032 00:20:03.595 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1666032 00:20:03.595 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:03.595 
12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:03.595 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:03.595 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.595 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:03.595 "subsystems": [ 00:20:03.595 { 00:20:03.595 "subsystem": "keyring", 00:20:03.595 "config": [ 00:20:03.595 { 00:20:03.595 "method": "keyring_file_add_key", 00:20:03.595 "params": { 00:20:03.595 "name": "key0", 00:20:03.595 "path": "/tmp/tmp.GkQqcE62Uk" 00:20:03.595 } 00:20:03.595 } 00:20:03.595 ] 00:20:03.595 }, 00:20:03.595 { 00:20:03.595 "subsystem": "iobuf", 00:20:03.595 "config": [ 00:20:03.595 { 00:20:03.595 "method": "iobuf_set_options", 00:20:03.595 "params": { 00:20:03.595 "small_pool_count": 8192, 00:20:03.595 "large_pool_count": 1024, 00:20:03.595 "small_bufsize": 8192, 00:20:03.595 "large_bufsize": 135168 00:20:03.595 } 00:20:03.595 } 00:20:03.595 ] 00:20:03.595 }, 00:20:03.595 { 00:20:03.595 "subsystem": "sock", 00:20:03.595 "config": [ 00:20:03.595 { 00:20:03.595 "method": "sock_set_default_impl", 00:20:03.595 "params": { 00:20:03.595 "impl_name": "posix" 00:20:03.595 } 00:20:03.595 }, 00:20:03.595 { 00:20:03.595 "method": "sock_impl_set_options", 00:20:03.595 "params": { 00:20:03.595 "impl_name": "ssl", 00:20:03.595 "recv_buf_size": 4096, 00:20:03.595 "send_buf_size": 4096, 00:20:03.595 "enable_recv_pipe": true, 00:20:03.595 "enable_quickack": false, 00:20:03.595 "enable_placement_id": 0, 00:20:03.595 "enable_zerocopy_send_server": true, 00:20:03.595 "enable_zerocopy_send_client": false, 00:20:03.595 "zerocopy_threshold": 0, 00:20:03.595 "tls_version": 0, 00:20:03.595 "enable_ktls": false 00:20:03.595 } 00:20:03.595 }, 00:20:03.595 { 00:20:03.595 "method": "sock_impl_set_options", 00:20:03.595 "params": { 00:20:03.595 "impl_name": "posix", 00:20:03.595 "recv_buf_size": 2097152, 00:20:03.595 "send_buf_size": 2097152, 00:20:03.595 "enable_recv_pipe": true, 00:20:03.595 "enable_quickack": false, 00:20:03.595 "enable_placement_id": 0, 00:20:03.595 "enable_zerocopy_send_server": true, 00:20:03.595 "enable_zerocopy_send_client": false, 00:20:03.595 "zerocopy_threshold": 0, 00:20:03.595 "tls_version": 0, 00:20:03.595 "enable_ktls": false 00:20:03.595 } 00:20:03.595 } 00:20:03.595 ] 00:20:03.595 }, 00:20:03.595 { 00:20:03.595 "subsystem": "vmd", 00:20:03.595 "config": [] 00:20:03.595 }, 00:20:03.595 { 00:20:03.595 "subsystem": "accel", 00:20:03.595 "config": [ 00:20:03.595 { 00:20:03.595 "method": "accel_set_options", 00:20:03.595 "params": { 00:20:03.595 "small_cache_size": 128, 00:20:03.595 "large_cache_size": 16, 00:20:03.595 "task_count": 2048, 00:20:03.595 "sequence_count": 2048, 00:20:03.595 "buf_count": 2048 00:20:03.595 } 00:20:03.595 } 00:20:03.595 ] 00:20:03.595 }, 00:20:03.595 { 00:20:03.595 "subsystem": "bdev", 00:20:03.595 "config": [ 00:20:03.595 { 00:20:03.595 "method": "bdev_set_options", 00:20:03.595 "params": { 00:20:03.595 "bdev_io_pool_size": 65535, 00:20:03.595 "bdev_io_cache_size": 256, 00:20:03.595 "bdev_auto_examine": true, 00:20:03.595 "iobuf_small_cache_size": 128, 00:20:03.595 "iobuf_large_cache_size": 16 00:20:03.595 } 00:20:03.595 }, 00:20:03.595 { 00:20:03.595 "method": "bdev_raid_set_options", 00:20:03.595 "params": { 00:20:03.595 "process_window_size_kb": 1024, 00:20:03.595 "process_max_bandwidth_mb_sec": 0 00:20:03.595 } 00:20:03.595 }, 
00:20:03.595 { 00:20:03.595 "method": "bdev_iscsi_set_options", 00:20:03.595 "params": { 00:20:03.595 "timeout_sec": 30 00:20:03.595 } 00:20:03.595 }, 00:20:03.595 { 00:20:03.595 "method": "bdev_nvme_set_options", 00:20:03.595 "params": { 00:20:03.595 "action_on_timeout": "none", 00:20:03.595 "timeout_us": 0, 00:20:03.595 "timeout_admin_us": 0, 00:20:03.595 "keep_alive_timeout_ms": 10000, 00:20:03.595 "arbitration_burst": 0, 00:20:03.595 "low_priority_weight": 0, 00:20:03.595 "medium_priority_weight": 0, 00:20:03.595 "high_priority_weight": 0, 00:20:03.595 "nvme_adminq_poll_period_us": 10000, 00:20:03.595 "nvme_ioq_poll_period_us": 0, 00:20:03.595 "io_queue_requests": 0, 00:20:03.595 "delay_cmd_submit": true, 00:20:03.595 "transport_retry_count": 4, 00:20:03.595 "bdev_retry_count": 3, 00:20:03.595 "transport_ack_timeout": 0, 00:20:03.595 "ctrlr_loss_timeout_sec": 0, 00:20:03.595 "reconnect_delay_sec": 0, 00:20:03.595 "fast_io_fail_timeout_sec": 0, 00:20:03.595 "disable_auto_failback": false, 00:20:03.595 "generate_uuids": false, 00:20:03.595 "transport_tos": 0, 00:20:03.595 "nvme_error_stat": false, 00:20:03.595 "rdma_srq_size": 0, 00:20:03.595 "io_path_stat": false, 00:20:03.595 "allow_accel_sequence": false, 00:20:03.595 "rdma_max_cq_size": 0, 00:20:03.595 "rdma_cm_event_timeout_ms": 0, 00:20:03.595 "dhchap_digests": [ 00:20:03.595 "sha256", 00:20:03.595 "sha384", 00:20:03.595 "sha512" 00:20:03.595 ], 00:20:03.595 "dhchap_dhgroups": [ 00:20:03.595 "null", 00:20:03.595 "ffdhe2048", 00:20:03.595 "ffdhe3072", 00:20:03.595 "ffdhe4096", 00:20:03.595 "ffdhe6144", 00:20:03.595 "ffdhe8192" 00:20:03.595 ] 00:20:03.595 } 00:20:03.595 }, 00:20:03.595 { 00:20:03.595 "method": "bdev_nvme_set_hotplug", 00:20:03.595 "params": { 00:20:03.595 "period_us": 100000, 00:20:03.595 "enable": false 00:20:03.595 } 00:20:03.595 }, 00:20:03.595 { 00:20:03.595 "method": "bdev_malloc_create", 00:20:03.595 "params": { 00:20:03.595 "name": "malloc0", 00:20:03.595 "num_blocks": 8192, 00:20:03.595 "block_size": 4096, 00:20:03.595 "physical_block_size": 4096, 00:20:03.595 "uuid": "0554a3d7-7336-47aa-9f59-ae62b3be447d", 00:20:03.595 "optimal_io_boundary": 0, 00:20:03.595 "md_size": 0, 00:20:03.595 "dif_type": 0, 00:20:03.595 "dif_is_head_of_md": false, 00:20:03.595 "dif_pi_format": 0 00:20:03.595 } 00:20:03.595 }, 00:20:03.595 { 00:20:03.595 "method": "bdev_wait_for_examine" 00:20:03.595 } 00:20:03.595 ] 00:20:03.595 }, 00:20:03.595 { 00:20:03.595 "subsystem": "nbd", 00:20:03.595 "config": [] 00:20:03.595 }, 00:20:03.595 { 00:20:03.595 "subsystem": "scheduler", 00:20:03.595 "config": [ 00:20:03.595 { 00:20:03.595 "method": "framework_set_scheduler", 00:20:03.595 "params": { 00:20:03.595 "name": "static" 00:20:03.595 } 00:20:03.595 } 00:20:03.595 ] 00:20:03.595 }, 00:20:03.595 { 00:20:03.595 "subsystem": "nvmf", 00:20:03.595 "config": [ 00:20:03.595 { 00:20:03.595 "method": "nvmf_set_config", 00:20:03.595 "params": { 00:20:03.595 "discovery_filter": "match_any", 00:20:03.595 "admin_cmd_passthru": { 00:20:03.595 "identify_ctrlr": false 00:20:03.595 }, 00:20:03.595 "dhchap_digests": [ 00:20:03.595 "sha256", 00:20:03.595 "sha384", 00:20:03.595 "sha512" 00:20:03.595 ], 00:20:03.595 "dhchap_dhgroups": [ 00:20:03.595 "null", 00:20:03.595 "ffdhe2048", 00:20:03.595 "ffdhe3072", 00:20:03.595 "ffdhe4096", 00:20:03.595 "ffdhe6144", 00:20:03.595 "ffdhe8192" 00:20:03.595 ] 00:20:03.595 } 00:20:03.595 }, 00:20:03.595 { 00:20:03.595 "method": "nvmf_set_max_subsystems", 00:20:03.595 "params": { 00:20:03.596 "max_subsystems": 1024 
00:20:03.596 } 00:20:03.596 }, 00:20:03.596 { 00:20:03.596 "method": "nvmf_set_crdt", 00:20:03.596 "params": { 00:20:03.596 "crdt1": 0, 00:20:03.596 "crdt2": 0, 00:20:03.596 "crdt3": 0 00:20:03.596 } 00:20:03.596 }, 00:20:03.596 { 00:20:03.596 "method": "nvmf_create_transport", 00:20:03.596 "params": { 00:20:03.596 "trtype": "TCP", 00:20:03.596 "max_queue_depth": 128, 00:20:03.596 "max_io_qpairs_per_ctrlr": 127, 00:20:03.596 "in_capsule_data_size": 4096, 00:20:03.596 "max_io_size": 131072, 00:20:03.596 "io_unit_size": 131072, 00:20:03.596 "max_aq_depth": 128, 00:20:03.596 "num_shared_buffers": 511, 00:20:03.596 "buf_cache_size": 4294967295, 00:20:03.596 "dif_insert_or_strip": false, 00:20:03.596 "zcopy": false, 00:20:03.596 "c2h_success": false, 00:20:03.596 "sock_priority": 0, 00:20:03.596 "abort_timeout_sec": 1, 00:20:03.596 "ack_timeout": 0, 00:20:03.596 "data_wr_pool_size": 0 00:20:03.596 } 00:20:03.596 }, 00:20:03.596 { 00:20:03.596 "method": "nvmf_create_subsystem", 00:20:03.596 "params": { 00:20:03.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.596 "allow_any_host": false, 00:20:03.596 "serial_number": "SPDK00000000000001", 00:20:03.596 "model_number": "SPDK bdev Controller", 00:20:03.596 "max_namespaces": 10, 00:20:03.596 "min_cntlid": 1, 00:20:03.596 "max_cntlid": 65519, 00:20:03.596 "ana_reporting": false 00:20:03.596 } 00:20:03.596 }, 00:20:03.596 { 00:20:03.596 "method": "nvmf_subsystem_add_host", 00:20:03.596 "params": { 00:20:03.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.596 "host": "nqn.2016-06.io.spdk:host1", 00:20:03.596 "psk": "key0" 00:20:03.596 } 00:20:03.596 }, 00:20:03.596 { 00:20:03.596 "method": "nvmf_subsystem_add_ns", 00:20:03.596 "params": { 00:20:03.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.596 "namespace": { 00:20:03.596 "nsid": 1, 00:20:03.596 "bdev_name": "malloc0", 00:20:03.596 "nguid": "0554A3D7733647AA9F59AE62B3BE447D", 00:20:03.596 "uuid": "0554a3d7-7336-47aa-9f59-ae62b3be447d", 00:20:03.596 "no_auto_visible": false 00:20:03.596 } 00:20:03.596 } 00:20:03.596 }, 00:20:03.596 { 00:20:03.596 "method": "nvmf_subsystem_add_listener", 00:20:03.596 "params": { 00:20:03.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.596 "listen_address": { 00:20:03.596 "trtype": "TCP", 00:20:03.596 "adrfam": "IPv4", 00:20:03.596 "traddr": "10.0.0.2", 00:20:03.596 "trsvcid": "4420" 00:20:03.596 }, 00:20:03.596 "secure_channel": true 00:20:03.596 } 00:20:03.596 } 00:20:03.596 ] 00:20:03.596 } 00:20:03.596 ] 00:20:03.596 }' 00:20:03.596 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1666530 00:20:03.596 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1666530 00:20:03.596 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:03.596 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1666530 ']' 00:20:03.596 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.596 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:03.596 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:03.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.596 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:03.596 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.856 [2024-11-04 12:24:38.195804] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:20:03.856 [2024-11-04 12:24:38.195863] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.856 [2024-11-04 12:24:38.278844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.856 [2024-11-04 12:24:38.308355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.856 [2024-11-04 12:24:38.308385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.856 [2024-11-04 12:24:38.308391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.856 [2024-11-04 12:24:38.308396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.856 [2024-11-04 12:24:38.308400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.856 [2024-11-04 12:24:38.308908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.117 [2024-11-04 12:24:38.501319] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.117 [2024-11-04 12:24:38.533343] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:04.117 [2024-11-04 12:24:38.533533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.690 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:04.690 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:04.690 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:04.690 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:04.690 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.690 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.690 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1666874 00:20:04.690 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1666874 /var/tmp/bdevperf.sock 00:20:04.690 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1666874 ']' 00:20:04.690 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.690 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:04.690 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:04.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.690 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:04.690 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:04.690 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.690 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:04.690 "subsystems": [ 00:20:04.690 { 00:20:04.690 "subsystem": "keyring", 00:20:04.690 "config": [ 00:20:04.690 { 00:20:04.690 "method": "keyring_file_add_key", 00:20:04.690 "params": { 00:20:04.690 "name": "key0", 00:20:04.690 "path": "/tmp/tmp.GkQqcE62Uk" 00:20:04.690 } 00:20:04.690 } 00:20:04.690 ] 00:20:04.690 }, 00:20:04.690 { 00:20:04.690 "subsystem": "iobuf", 00:20:04.690 "config": [ 00:20:04.690 { 00:20:04.690 "method": "iobuf_set_options", 00:20:04.690 "params": { 00:20:04.690 "small_pool_count": 8192, 00:20:04.690 "large_pool_count": 1024, 00:20:04.690 "small_bufsize": 8192, 00:20:04.690 "large_bufsize": 135168 00:20:04.690 } 00:20:04.690 } 00:20:04.690 ] 00:20:04.690 }, 00:20:04.690 { 00:20:04.690 "subsystem": "sock", 00:20:04.690 "config": [ 00:20:04.690 { 00:20:04.690 "method": "sock_set_default_impl", 00:20:04.690 "params": { 00:20:04.690 "impl_name": "posix" 00:20:04.690 } 00:20:04.690 }, 00:20:04.690 { 00:20:04.690 "method": "sock_impl_set_options", 00:20:04.690 "params": { 00:20:04.690 "impl_name": "ssl", 00:20:04.690 "recv_buf_size": 4096, 00:20:04.690 "send_buf_size": 4096, 00:20:04.690 "enable_recv_pipe": true, 00:20:04.690 "enable_quickack": false, 00:20:04.690 "enable_placement_id": 0, 00:20:04.690 "enable_zerocopy_send_server": true, 00:20:04.690 "enable_zerocopy_send_client": false, 00:20:04.690 "zerocopy_threshold": 0, 00:20:04.690 "tls_version": 0, 00:20:04.690 "enable_ktls": false 00:20:04.690 } 00:20:04.690 }, 00:20:04.690 { 00:20:04.690 "method": "sock_impl_set_options", 00:20:04.690 "params": { 00:20:04.690 "impl_name": "posix", 00:20:04.690 "recv_buf_size": 2097152, 00:20:04.690 "send_buf_size": 2097152, 00:20:04.690 "enable_recv_pipe": true, 00:20:04.690 "enable_quickack": false, 00:20:04.690 "enable_placement_id": 0, 00:20:04.690 "enable_zerocopy_send_server": true, 00:20:04.690 "enable_zerocopy_send_client": false, 00:20:04.690 "zerocopy_threshold": 0, 00:20:04.690 "tls_version": 0, 00:20:04.690 "enable_ktls": false 00:20:04.690 } 00:20:04.690 } 00:20:04.690 ] 00:20:04.690 }, 00:20:04.690 { 00:20:04.690 "subsystem": "vmd", 00:20:04.690 "config": [] 00:20:04.690 }, 00:20:04.690 { 00:20:04.690 "subsystem": "accel", 00:20:04.690 "config": [ 00:20:04.690 { 00:20:04.690 "method": "accel_set_options", 00:20:04.690 "params": { 00:20:04.690 "small_cache_size": 128, 00:20:04.690 "large_cache_size": 16, 00:20:04.690 "task_count": 2048, 00:20:04.690 "sequence_count": 2048, 00:20:04.690 "buf_count": 2048 00:20:04.690 } 00:20:04.690 } 00:20:04.690 ] 00:20:04.690 }, 00:20:04.690 { 00:20:04.690 "subsystem": "bdev", 00:20:04.690 "config": [ 00:20:04.690 { 00:20:04.690 "method": "bdev_set_options", 00:20:04.690 "params": { 00:20:04.690 "bdev_io_pool_size": 65535, 00:20:04.690 "bdev_io_cache_size": 256, 00:20:04.690 "bdev_auto_examine": true, 00:20:04.690 "iobuf_small_cache_size": 128, 00:20:04.690 "iobuf_large_cache_size": 16 
00:20:04.690 } 00:20:04.690 }, 00:20:04.690 { 00:20:04.690 "method": "bdev_raid_set_options", 00:20:04.690 "params": { 00:20:04.690 "process_window_size_kb": 1024, 00:20:04.690 "process_max_bandwidth_mb_sec": 0 00:20:04.690 } 00:20:04.690 }, 00:20:04.690 { 00:20:04.690 "method": "bdev_iscsi_set_options", 00:20:04.690 "params": { 00:20:04.690 "timeout_sec": 30 00:20:04.690 } 00:20:04.690 }, 00:20:04.690 { 00:20:04.690 "method": "bdev_nvme_set_options", 00:20:04.690 "params": { 00:20:04.690 "action_on_timeout": "none", 00:20:04.690 "timeout_us": 0, 00:20:04.690 "timeout_admin_us": 0, 00:20:04.690 "keep_alive_timeout_ms": 10000, 00:20:04.690 "arbitration_burst": 0, 00:20:04.690 "low_priority_weight": 0, 00:20:04.690 "medium_priority_weight": 0, 00:20:04.691 "high_priority_weight": 0, 00:20:04.691 "nvme_adminq_poll_period_us": 10000, 00:20:04.691 "nvme_ioq_poll_period_us": 0, 00:20:04.691 "io_queue_requests": 512, 00:20:04.691 "delay_cmd_submit": true, 00:20:04.691 "transport_retry_count": 4, 00:20:04.691 "bdev_retry_count": 3, 00:20:04.691 "transport_ack_timeout": 0, 00:20:04.691 "ctrlr_loss_timeout_sec": 0, 00:20:04.691 "reconnect_delay_sec": 0, 00:20:04.691 "fast_io_fail_timeout_sec": 0, 00:20:04.691 "disable_auto_failback": false, 00:20:04.691 "generate_uuids": false, 00:20:04.691 "transport_tos": 0, 00:20:04.691 "nvme_error_stat": false, 00:20:04.691 "rdma_srq_size": 0, 00:20:04.691 "io_path_stat": false, 00:20:04.691 "allow_accel_sequence": false, 00:20:04.691 "rdma_max_cq_size": 0, 00:20:04.691 "rdma_cm_event_timeout_ms": 0, 00:20:04.691 "dhchap_digests": [ 00:20:04.691 "sha256", 00:20:04.691 "sha384", 00:20:04.691 "sha512" 00:20:04.691 ], 00:20:04.691 "dhchap_dhgroups": [ 00:20:04.691 "null", 00:20:04.691 "ffdhe2048", 00:20:04.691 "ffdhe3072", 00:20:04.691 "ffdhe4096", 00:20:04.691 "ffdhe6144", 00:20:04.691 "ffdhe8192" 00:20:04.691 ] 00:20:04.691 } 00:20:04.691 }, 00:20:04.691 { 00:20:04.691 "method": "bdev_nvme_attach_controller", 00:20:04.691 "params": { 00:20:04.691 "name": "TLSTEST", 00:20:04.691 "trtype": "TCP", 00:20:04.691 "adrfam": "IPv4", 00:20:04.691 "traddr": "10.0.0.2", 00:20:04.691 "trsvcid": "4420", 00:20:04.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.691 "prchk_reftag": false, 00:20:04.691 "prchk_guard": false, 00:20:04.691 "ctrlr_loss_timeout_sec": 0, 00:20:04.691 "reconnect_delay_sec": 0, 00:20:04.691 "fast_io_fail_timeout_sec": 0, 00:20:04.691 "psk": "key0", 00:20:04.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:04.691 "hdgst": false, 00:20:04.691 "ddgst": false, 00:20:04.691 "multipath": "multipath" 00:20:04.691 } 00:20:04.691 }, 00:20:04.691 { 00:20:04.691 "method": "bdev_nvme_set_hotplug", 00:20:04.691 "params": { 00:20:04.691 "period_us": 100000, 00:20:04.691 "enable": false 00:20:04.691 } 00:20:04.691 }, 00:20:04.691 { 00:20:04.691 "method": "bdev_wait_for_examine" 00:20:04.691 } 00:20:04.691 ] 00:20:04.691 }, 00:20:04.691 { 00:20:04.691 "subsystem": "nbd", 00:20:04.691 "config": [] 00:20:04.691 } 00:20:04.691 ] 00:20:04.691 }' 00:20:04.691 [2024-11-04 12:24:39.072159] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:20:04.691 [2024-11-04 12:24:39.072210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666874 ] 00:20:04.691 [2024-11-04 12:24:39.121980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.691 [2024-11-04 12:24:39.151084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.951 [2024-11-04 12:24:39.284427] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.524 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:05.524 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:05.524 12:24:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:05.524 Running I/O for 10 seconds... 00:20:07.414 4452.00 IOPS, 17.39 MiB/s [2024-11-04T11:24:43.367Z] 4865.50 IOPS, 19.01 MiB/s [2024-11-04T11:24:44.308Z] 5112.67 IOPS, 19.97 MiB/s [2024-11-04T11:24:45.249Z] 5355.25 IOPS, 20.92 MiB/s [2024-11-04T11:24:46.190Z] 5405.80 IOPS, 21.12 MiB/s [2024-11-04T11:24:47.130Z] 5360.17 IOPS, 20.94 MiB/s [2024-11-04T11:24:48.072Z] 5377.86 IOPS, 21.01 MiB/s [2024-11-04T11:24:49.014Z] 5457.38 IOPS, 21.32 MiB/s [2024-11-04T11:24:50.399Z] 5529.89 IOPS, 21.60 MiB/s [2024-11-04T11:24:50.399Z] 5509.20 IOPS, 21.52 MiB/s 00:20:15.829 Latency(us) 00:20:15.829 [2024-11-04T11:24:50.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.829 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:15.829 Verification LBA range: start 0x0 length 0x2000 00:20:15.829 TLSTESTn1 : 10.02 5511.76 21.53 0.00 0.00 23188.45 5679.79 82575.36 00:20:15.829 [2024-11-04T11:24:50.399Z] =================================================================================================================== 00:20:15.829 [2024-11-04T11:24:50.399Z] Total : 5511.76 21.53 0.00 0.00 23188.45 5679.79 82575.36 00:20:15.829 { 00:20:15.829 "results": [ 00:20:15.829 { 00:20:15.829 "job": "TLSTESTn1", 00:20:15.829 "core_mask": "0x4", 00:20:15.829 "workload": "verify", 00:20:15.829 "status": "finished", 00:20:15.829 "verify_range": { 00:20:15.829 "start": 0, 00:20:15.829 "length": 8192 00:20:15.829 }, 00:20:15.829 "queue_depth": 128, 00:20:15.829 "io_size": 4096, 00:20:15.829 "runtime": 10.018395, 00:20:15.829 "iops": 5511.761115428169, 00:20:15.829 "mibps": 21.530316857141287, 00:20:15.829 "io_failed": 0, 00:20:15.829 "io_timeout": 0, 00:20:15.829 "avg_latency_us": 23188.449556372503, 00:20:15.829 "min_latency_us": 5679.786666666667, 00:20:15.829 "max_latency_us": 82575.36 00:20:15.829 } 00:20:15.829 ], 00:20:15.829 "core_count": 1 00:20:15.829 } 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1666874 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1666874 ']' 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1666874 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1666874 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1666874' 00:20:15.829 killing process with pid 1666874 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1666874 00:20:15.829 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.829 00:20:15.829 Latency(us) 00:20:15.829 [2024-11-04T11:24:50.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.829 [2024-11-04T11:24:50.399Z] =================================================================================================================== 00:20:15.829 [2024-11-04T11:24:50.399Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1666874 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1666530 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1666530 ']' 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1666530 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1666530 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1666530' 00:20:15.829 killing process with pid 1666530 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1666530 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1666530 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1668899 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1668899 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
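Note on the config plumbing traced above: neither application start writes its JSON configuration to disk. nvmfappstart hands nvmf_tgt `-c /dev/fd/62` and bdevperf gets `-c /dev/fd/63`, which is what bash process substitution typically expands to, so the echoed `{ "subsystems": [...] }` blobs flow straight from the script into the app. A minimal sketch of the same pattern, with a placeholder binary path and a trivially small config body rather than the test's real values:

    #!/usr/bin/env bash
    # Tiny stand-in config; the test generates a much larger one inline.
    tgtconf='{"subsystems":[{"subsystem":"nbd","config":[]}]}'
    # <(...) expands to /dev/fd/NN; the app opens it like an ordinary
    # config file, and nothing ever lands on disk.
    ./build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &
    tgt_pid=$!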
00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1668899 ']' 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:15.829 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.090 [2024-11-04 12:24:50.420694] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:20:16.090 [2024-11-04 12:24:50.420758] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.090 [2024-11-04 12:24:50.485875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.090 [2024-11-04 12:24:50.520566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.090 [2024-11-04 12:24:50.520601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.090 [2024-11-04 12:24:50.520609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.090 [2024-11-04 12:24:50.520616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.090 [2024-11-04 12:24:50.520621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
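The `waitforlisten 1668899` step above is what bridges the fork of nvmf_tgt and the first rpc.py call: it polls until the new process answers on its RPC socket (default /var/tmp/spdk.sock), giving up after `max_retries=100` attempts. A simplified sketch of that helper, assuming rpc.py is on the relative path shown in the trace (the real function in autotest_common.sh handles more edge cases):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i > 0; i--)); do
            # Bail out early if the app died during startup.
            kill -0 "$pid" 2>/dev/null || return 1
            # rpc_get_methods succeeds once the app is listening.
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }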
00:20:16.090 [2024-11-04 12:24:50.521202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.090 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:16.090 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:16.090 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:16.090 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:16.090 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.090 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.090 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.GkQqcE62Uk 00:20:16.090 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.GkQqcE62Uk 00:20:16.090 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:16.351 [2024-11-04 12:24:50.808376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.351 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:16.612 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:16.612 [2024-11-04 12:24:51.129188] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:16.612 [2024-11-04 12:24:51.129417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.612 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:16.873 malloc0 00:20:16.873 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:17.134 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.GkQqcE62Uk 00:20:17.134 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:17.395 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1669264 00:20:17.395 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:17.395 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:17.395 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1669264 /var/tmp/bdevperf.sock 00:20:17.395 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1669264 ']' 00:20:17.395 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.395 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:17.395 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.395 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:17.395 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.395 [2024-11-04 12:24:51.902566] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:20:17.395 [2024-11-04 12:24:51.902619] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1669264 ] 00:20:17.655 [2024-11-04 12:24:51.979134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.655 [2024-11-04 12:24:52.008529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.226 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:18.226 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:18.227 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GkQqcE62Uk 00:20:18.487 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:18.487 [2024-11-04 12:24:52.979728] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.487 nvme0n1 00:20:18.749 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:18.749 Running I/O for 1 seconds... 
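The bdev_nvme_attach_controller at 12:24:56 only negotiates TLS because both ends were handed the same PSK file. Condensed from the rpc.py calls traced in this run (the key path is the test's throwaway temp file; substitute your own):

    KEY=/tmp/tmp.GkQqcE62Uk
    # Target side, over the default RPC socket /var/tmp/spdk.sock:
    scripts/rpc.py keyring_file_add_key key0 "$KEY"
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k   # -k: listener requires TLS
    # Initiator side, over bdevperf's RPC socket -- same key name, same file:
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY"
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Both tcp.c and bdev_nvme_rpc.c log "TLS support is considered experimental" at this point, as seen above; the subsequent "Running I/O" line confirms the secure channel actually carried traffic.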
00:20:19.689 5222.00 IOPS, 20.40 MiB/s 00:20:19.689 Latency(us) 00:20:19.689 [2024-11-04T11:24:54.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.689 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:19.689 Verification LBA range: start 0x0 length 0x2000 00:20:19.689 nvme0n1 : 1.03 5188.56 20.27 0.00 0.00 24444.49 4778.67 41506.13 00:20:19.689 [2024-11-04T11:24:54.259Z] =================================================================================================================== 00:20:19.689 [2024-11-04T11:24:54.259Z] Total : 5188.56 20.27 0.00 0.00 24444.49 4778.67 41506.13 00:20:19.689 { 00:20:19.689 "results": [ 00:20:19.689 { 00:20:19.689 "job": "nvme0n1", 00:20:19.689 "core_mask": "0x2", 00:20:19.689 "workload": "verify", 00:20:19.689 "status": "finished", 00:20:19.689 "verify_range": { 00:20:19.689 "start": 0, 00:20:19.689 "length": 8192 00:20:19.689 }, 00:20:19.689 "queue_depth": 128, 00:20:19.689 "io_size": 4096, 00:20:19.689 "runtime": 1.031308, 00:20:19.689 "iops": 5188.556667843166, 00:20:19.689 "mibps": 20.267799483762367, 00:20:19.689 "io_failed": 0, 00:20:19.689 "io_timeout": 0, 00:20:19.689 "avg_latency_us": 24444.49066467327, 00:20:19.689 "min_latency_us": 4778.666666666667, 00:20:19.689 "max_latency_us": 41506.13333333333 00:20:19.689 } 00:20:19.689 ], 00:20:19.689 "core_count": 1 00:20:19.689 } 00:20:19.689 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1669264 00:20:19.689 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1669264 ']' 00:20:19.689 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1669264 00:20:19.689 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:19.689 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:19.689 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1669264 00:20:19.950 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:19.950 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:19.950 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1669264' 00:20:19.950 killing process with pid 1669264 00:20:19.950 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1669264 00:20:19.950 Received shutdown signal, test time was about 1.000000 seconds 00:20:19.950 00:20:19.950 Latency(us) 00:20:19.950 [2024-11-04T11:24:54.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.950 [2024-11-04T11:24:54.520Z] =================================================================================================================== 00:20:19.950 [2024-11-04T11:24:54.520Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.950 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1669264 00:20:19.950 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1668899 00:20:19.950 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1668899 ']' 00:20:19.950 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1668899 00:20:19.950 12:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:19.950 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:19.950 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1668899 00:20:19.950 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:19.950 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:19.950 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1668899' 00:20:19.950 killing process with pid 1668899 00:20:19.950 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1668899 00:20:19.950 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1668899 00:20:20.211 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:20.212 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:20.212 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:20.212 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.212 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1669912 00:20:20.212 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1669912 00:20:20.212 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:20.212 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1669912 ']' 00:20:20.212 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.212 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:20.212 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.212 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:20.212 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.212 [2024-11-04 12:24:54.644211] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:20:20.212 [2024-11-04 12:24:54.644273] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.212 [2024-11-04 12:24:54.709892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.212 [2024-11-04 12:24:54.744962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.212 [2024-11-04 12:24:54.744997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:20.212 [2024-11-04 12:24:54.745005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.212 [2024-11-04 12:24:54.745011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.212 [2024-11-04 12:24:54.745017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:20.212 [2024-11-04 12:24:54.745581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.153 [2024-11-04 12:24:55.465386] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.153 malloc0 00:20:21.153 [2024-11-04 12:24:55.492074] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.153 [2024-11-04 12:24:55.492288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1669966 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1669966 /var/tmp/bdevperf.sock 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1669966 ']' 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:21.153 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.153 [2024-11-04 12:24:55.572439] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:20:21.153 [2024-11-04 12:24:55.572487] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1669966 ] 00:20:21.153 [2024-11-04 12:24:55.648084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.153 [2024-11-04 12:24:55.677848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.093 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:22.093 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:22.093 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GkQqcE62Uk 00:20:22.093 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:22.353 [2024-11-04 12:24:56.677023] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.353 nvme0n1 00:20:22.353 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:22.353 Running I/O for 1 seconds... 00:20:23.554 4709.00 IOPS, 18.39 MiB/s 00:20:23.554 Latency(us) 00:20:23.554 [2024-11-04T11:24:58.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.554 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:23.555 Verification LBA range: start 0x0 length 0x2000 00:20:23.555 nvme0n1 : 1.02 4747.36 18.54 0.00 0.00 26769.79 6471.68 33860.27 00:20:23.555 [2024-11-04T11:24:58.125Z] =================================================================================================================== 00:20:23.555 [2024-11-04T11:24:58.125Z] Total : 4747.36 18.54 0.00 0.00 26769.79 6471.68 33860.27 00:20:23.555 { 00:20:23.555 "results": [ 00:20:23.555 { 00:20:23.555 "job": "nvme0n1", 00:20:23.555 "core_mask": "0x2", 00:20:23.555 "workload": "verify", 00:20:23.555 "status": "finished", 00:20:23.555 "verify_range": { 00:20:23.555 "start": 0, 00:20:23.555 "length": 8192 00:20:23.555 }, 00:20:23.555 "queue_depth": 128, 00:20:23.555 "io_size": 4096, 00:20:23.555 "runtime": 1.018883, 00:20:23.555 "iops": 4747.355682644622, 00:20:23.555 "mibps": 18.544358135330555, 00:20:23.555 "io_failed": 0, 00:20:23.555 "io_timeout": 0, 00:20:23.555 "avg_latency_us": 26769.79174419406, 00:20:23.555 "min_latency_us": 6471.68, 00:20:23.555 "max_latency_us": 33860.26666666667 00:20:23.555 } 00:20:23.555 ], 00:20:23.555 "core_count": 1 00:20:23.555 } 00:20:23.555 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:23.555 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.555 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.555 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.555 12:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:23.555 "subsystems": [ 00:20:23.555 { 00:20:23.555 "subsystem": "keyring", 00:20:23.555 "config": [ 00:20:23.555 { 00:20:23.555 "method": "keyring_file_add_key", 00:20:23.555 "params": { 00:20:23.555 "name": "key0", 00:20:23.555 "path": "/tmp/tmp.GkQqcE62Uk" 00:20:23.555 } 00:20:23.555 } 00:20:23.555 ] 00:20:23.555 }, 00:20:23.555 { 00:20:23.555 "subsystem": "iobuf", 00:20:23.555 "config": [ 00:20:23.555 { 00:20:23.555 "method": "iobuf_set_options", 00:20:23.555 "params": { 00:20:23.555 "small_pool_count": 8192, 00:20:23.555 "large_pool_count": 1024, 00:20:23.555 "small_bufsize": 8192, 00:20:23.555 "large_bufsize": 135168 00:20:23.555 } 00:20:23.555 } 00:20:23.555 ] 00:20:23.555 }, 00:20:23.555 { 00:20:23.555 "subsystem": "sock", 00:20:23.555 "config": [ 00:20:23.555 { 00:20:23.555 "method": "sock_set_default_impl", 00:20:23.555 "params": { 00:20:23.555 "impl_name": "posix" 00:20:23.555 } 00:20:23.555 }, 00:20:23.555 { 00:20:23.555 "method": "sock_impl_set_options", 00:20:23.555 "params": { 00:20:23.555 "impl_name": "ssl", 00:20:23.555 "recv_buf_size": 4096, 00:20:23.555 "send_buf_size": 4096, 00:20:23.555 "enable_recv_pipe": true, 00:20:23.555 "enable_quickack": false, 00:20:23.555 "enable_placement_id": 0, 00:20:23.555 "enable_zerocopy_send_server": true, 00:20:23.555 "enable_zerocopy_send_client": false, 00:20:23.555 "zerocopy_threshold": 0, 00:20:23.555 "tls_version": 0, 00:20:23.555 "enable_ktls": false 00:20:23.555 } 00:20:23.555 }, 00:20:23.555 { 00:20:23.555 "method": "sock_impl_set_options", 00:20:23.555 "params": { 00:20:23.555 "impl_name": "posix", 00:20:23.555 "recv_buf_size": 2097152, 00:20:23.555 "send_buf_size": 2097152, 00:20:23.555 "enable_recv_pipe": true, 00:20:23.555 "enable_quickack": false, 00:20:23.555 "enable_placement_id": 0, 00:20:23.555 "enable_zerocopy_send_server": true, 00:20:23.555 "enable_zerocopy_send_client": false, 00:20:23.555 "zerocopy_threshold": 0, 00:20:23.555 "tls_version": 0, 00:20:23.555 "enable_ktls": false 00:20:23.555 } 00:20:23.555 } 00:20:23.555 ] 00:20:23.555 }, 00:20:23.555 { 00:20:23.555 "subsystem": "vmd", 00:20:23.555 "config": [] 00:20:23.555 }, 00:20:23.555 { 00:20:23.555 "subsystem": "accel", 00:20:23.555 "config": [ 00:20:23.555 { 00:20:23.555 "method": "accel_set_options", 00:20:23.555 "params": { 00:20:23.555 "small_cache_size": 128, 00:20:23.555 "large_cache_size": 16, 00:20:23.555 "task_count": 2048, 00:20:23.555 "sequence_count": 2048, 00:20:23.555 "buf_count": 2048 00:20:23.555 } 00:20:23.555 } 00:20:23.555 ] 00:20:23.555 }, 00:20:23.555 { 00:20:23.555 "subsystem": "bdev", 00:20:23.555 "config": [ 00:20:23.555 { 00:20:23.555 "method": "bdev_set_options", 00:20:23.555 "params": { 00:20:23.555 "bdev_io_pool_size": 65535, 00:20:23.555 "bdev_io_cache_size": 256, 00:20:23.555 "bdev_auto_examine": true, 00:20:23.555 "iobuf_small_cache_size": 128, 00:20:23.555 "iobuf_large_cache_size": 16 00:20:23.555 } 00:20:23.555 }, 00:20:23.555 { 00:20:23.555 "method": "bdev_raid_set_options", 00:20:23.555 "params": { 00:20:23.555 "process_window_size_kb": 1024, 00:20:23.555 "process_max_bandwidth_mb_sec": 0 00:20:23.555 } 00:20:23.555 }, 00:20:23.555 { 00:20:23.555 "method": "bdev_iscsi_set_options", 00:20:23.555 "params": { 00:20:23.555 "timeout_sec": 30 00:20:23.555 } 00:20:23.555 }, 00:20:23.555 { 00:20:23.555 "method": "bdev_nvme_set_options", 00:20:23.555 "params": { 00:20:23.555 "action_on_timeout": "none", 00:20:23.555 "timeout_us": 0, 00:20:23.555 
"timeout_admin_us": 0, 00:20:23.555 "keep_alive_timeout_ms": 10000, 00:20:23.555 "arbitration_burst": 0, 00:20:23.555 "low_priority_weight": 0, 00:20:23.555 "medium_priority_weight": 0, 00:20:23.555 "high_priority_weight": 0, 00:20:23.555 "nvme_adminq_poll_period_us": 10000, 00:20:23.555 "nvme_ioq_poll_period_us": 0, 00:20:23.555 "io_queue_requests": 0, 00:20:23.555 "delay_cmd_submit": true, 00:20:23.555 "transport_retry_count": 4, 00:20:23.555 "bdev_retry_count": 3, 00:20:23.555 "transport_ack_timeout": 0, 00:20:23.555 "ctrlr_loss_timeout_sec": 0, 00:20:23.555 "reconnect_delay_sec": 0, 00:20:23.555 "fast_io_fail_timeout_sec": 0, 00:20:23.555 "disable_auto_failback": false, 00:20:23.555 "generate_uuids": false, 00:20:23.555 "transport_tos": 0, 00:20:23.555 "nvme_error_stat": false, 00:20:23.555 "rdma_srq_size": 0, 00:20:23.555 "io_path_stat": false, 00:20:23.555 "allow_accel_sequence": false, 00:20:23.555 "rdma_max_cq_size": 0, 00:20:23.555 "rdma_cm_event_timeout_ms": 0, 00:20:23.555 "dhchap_digests": [ 00:20:23.555 "sha256", 00:20:23.555 "sha384", 00:20:23.555 "sha512" 00:20:23.555 ], 00:20:23.555 "dhchap_dhgroups": [ 00:20:23.555 "null", 00:20:23.555 "ffdhe2048", 00:20:23.555 "ffdhe3072", 00:20:23.555 "ffdhe4096", 00:20:23.555 "ffdhe6144", 00:20:23.555 "ffdhe8192" 00:20:23.555 ] 00:20:23.555 } 00:20:23.555 }, 00:20:23.555 { 00:20:23.555 "method": "bdev_nvme_set_hotplug", 00:20:23.555 "params": { 00:20:23.555 "period_us": 100000, 00:20:23.555 "enable": false 00:20:23.555 } 00:20:23.555 }, 00:20:23.555 { 00:20:23.555 "method": "bdev_malloc_create", 00:20:23.555 "params": { 00:20:23.555 "name": "malloc0", 00:20:23.555 "num_blocks": 8192, 00:20:23.555 "block_size": 4096, 00:20:23.555 "physical_block_size": 4096, 00:20:23.556 "uuid": "c8c592b7-f0da-458c-9160-bb0aefcb5030", 00:20:23.556 "optimal_io_boundary": 0, 00:20:23.556 "md_size": 0, 00:20:23.556 "dif_type": 0, 00:20:23.556 "dif_is_head_of_md": false, 00:20:23.556 "dif_pi_format": 0 00:20:23.556 } 00:20:23.556 }, 00:20:23.556 { 00:20:23.556 "method": "bdev_wait_for_examine" 00:20:23.556 } 00:20:23.556 ] 00:20:23.556 }, 00:20:23.556 { 00:20:23.556 "subsystem": "nbd", 00:20:23.556 "config": [] 00:20:23.556 }, 00:20:23.556 { 00:20:23.556 "subsystem": "scheduler", 00:20:23.556 "config": [ 00:20:23.556 { 00:20:23.556 "method": "framework_set_scheduler", 00:20:23.556 "params": { 00:20:23.556 "name": "static" 00:20:23.556 } 00:20:23.556 } 00:20:23.556 ] 00:20:23.556 }, 00:20:23.556 { 00:20:23.556 "subsystem": "nvmf", 00:20:23.556 "config": [ 00:20:23.556 { 00:20:23.556 "method": "nvmf_set_config", 00:20:23.556 "params": { 00:20:23.556 "discovery_filter": "match_any", 00:20:23.556 "admin_cmd_passthru": { 00:20:23.556 "identify_ctrlr": false 00:20:23.556 }, 00:20:23.556 "dhchap_digests": [ 00:20:23.556 "sha256", 00:20:23.556 "sha384", 00:20:23.556 "sha512" 00:20:23.556 ], 00:20:23.556 "dhchap_dhgroups": [ 00:20:23.556 "null", 00:20:23.556 "ffdhe2048", 00:20:23.556 "ffdhe3072", 00:20:23.556 "ffdhe4096", 00:20:23.556 "ffdhe6144", 00:20:23.556 "ffdhe8192" 00:20:23.556 ] 00:20:23.556 } 00:20:23.556 }, 00:20:23.556 { 00:20:23.556 "method": "nvmf_set_max_subsystems", 00:20:23.556 "params": { 00:20:23.556 "max_subsystems": 1024 00:20:23.556 } 00:20:23.556 }, 00:20:23.556 { 00:20:23.556 "method": "nvmf_set_crdt", 00:20:23.556 "params": { 00:20:23.556 "crdt1": 0, 00:20:23.556 "crdt2": 0, 00:20:23.556 "crdt3": 0 00:20:23.556 } 00:20:23.556 }, 00:20:23.556 { 00:20:23.556 "method": "nvmf_create_transport", 00:20:23.556 "params": { 00:20:23.556 "trtype": 
"TCP", 00:20:23.556 "max_queue_depth": 128, 00:20:23.556 "max_io_qpairs_per_ctrlr": 127, 00:20:23.556 "in_capsule_data_size": 4096, 00:20:23.556 "max_io_size": 131072, 00:20:23.556 "io_unit_size": 131072, 00:20:23.556 "max_aq_depth": 128, 00:20:23.556 "num_shared_buffers": 511, 00:20:23.556 "buf_cache_size": 4294967295, 00:20:23.556 "dif_insert_or_strip": false, 00:20:23.556 "zcopy": false, 00:20:23.556 "c2h_success": false, 00:20:23.556 "sock_priority": 0, 00:20:23.556 "abort_timeout_sec": 1, 00:20:23.556 "ack_timeout": 0, 00:20:23.556 "data_wr_pool_size": 0 00:20:23.556 } 00:20:23.556 }, 00:20:23.556 { 00:20:23.556 "method": "nvmf_create_subsystem", 00:20:23.556 "params": { 00:20:23.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.556 "allow_any_host": false, 00:20:23.556 "serial_number": "00000000000000000000", 00:20:23.556 "model_number": "SPDK bdev Controller", 00:20:23.556 "max_namespaces": 32, 00:20:23.556 "min_cntlid": 1, 00:20:23.556 "max_cntlid": 65519, 00:20:23.556 "ana_reporting": false 00:20:23.556 } 00:20:23.556 }, 00:20:23.556 { 00:20:23.556 "method": "nvmf_subsystem_add_host", 00:20:23.556 "params": { 00:20:23.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.556 "host": "nqn.2016-06.io.spdk:host1", 00:20:23.556 "psk": "key0" 00:20:23.556 } 00:20:23.556 }, 00:20:23.556 { 00:20:23.556 "method": "nvmf_subsystem_add_ns", 00:20:23.556 "params": { 00:20:23.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.556 "namespace": { 00:20:23.556 "nsid": 1, 00:20:23.556 "bdev_name": "malloc0", 00:20:23.556 "nguid": "C8C592B7F0DA458C9160BB0AEFCB5030", 00:20:23.556 "uuid": "c8c592b7-f0da-458c-9160-bb0aefcb5030", 00:20:23.556 "no_auto_visible": false 00:20:23.556 } 00:20:23.556 } 00:20:23.556 }, 00:20:23.556 { 00:20:23.556 "method": "nvmf_subsystem_add_listener", 00:20:23.556 "params": { 00:20:23.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.556 "listen_address": { 00:20:23.556 "trtype": "TCP", 00:20:23.556 "adrfam": "IPv4", 00:20:23.556 "traddr": "10.0.0.2", 00:20:23.556 "trsvcid": "4420" 00:20:23.556 }, 00:20:23.556 "secure_channel": false, 00:20:23.556 "sock_impl": "ssl" 00:20:23.556 } 00:20:23.556 } 00:20:23.556 ] 00:20:23.556 } 00:20:23.556 ] 00:20:23.556 }' 00:20:23.556 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:23.816 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:23.816 "subsystems": [ 00:20:23.816 { 00:20:23.816 "subsystem": "keyring", 00:20:23.816 "config": [ 00:20:23.816 { 00:20:23.816 "method": "keyring_file_add_key", 00:20:23.816 "params": { 00:20:23.816 "name": "key0", 00:20:23.816 "path": "/tmp/tmp.GkQqcE62Uk" 00:20:23.816 } 00:20:23.816 } 00:20:23.816 ] 00:20:23.816 }, 00:20:23.816 { 00:20:23.816 "subsystem": "iobuf", 00:20:23.816 "config": [ 00:20:23.816 { 00:20:23.816 "method": "iobuf_set_options", 00:20:23.816 "params": { 00:20:23.816 "small_pool_count": 8192, 00:20:23.816 "large_pool_count": 1024, 00:20:23.816 "small_bufsize": 8192, 00:20:23.816 "large_bufsize": 135168 00:20:23.816 } 00:20:23.816 } 00:20:23.816 ] 00:20:23.816 }, 00:20:23.816 { 00:20:23.816 "subsystem": "sock", 00:20:23.816 "config": [ 00:20:23.816 { 00:20:23.816 "method": "sock_set_default_impl", 00:20:23.816 "params": { 00:20:23.816 "impl_name": "posix" 00:20:23.816 } 00:20:23.816 }, 00:20:23.816 { 00:20:23.816 "method": "sock_impl_set_options", 00:20:23.816 "params": { 00:20:23.816 "impl_name": "ssl", 00:20:23.816 
"recv_buf_size": 4096, 00:20:23.816 "send_buf_size": 4096, 00:20:23.816 "enable_recv_pipe": true, 00:20:23.816 "enable_quickack": false, 00:20:23.816 "enable_placement_id": 0, 00:20:23.816 "enable_zerocopy_send_server": true, 00:20:23.816 "enable_zerocopy_send_client": false, 00:20:23.816 "zerocopy_threshold": 0, 00:20:23.816 "tls_version": 0, 00:20:23.816 "enable_ktls": false 00:20:23.816 } 00:20:23.816 }, 00:20:23.816 { 00:20:23.816 "method": "sock_impl_set_options", 00:20:23.816 "params": { 00:20:23.816 "impl_name": "posix", 00:20:23.816 "recv_buf_size": 2097152, 00:20:23.816 "send_buf_size": 2097152, 00:20:23.816 "enable_recv_pipe": true, 00:20:23.816 "enable_quickack": false, 00:20:23.816 "enable_placement_id": 0, 00:20:23.816 "enable_zerocopy_send_server": true, 00:20:23.816 "enable_zerocopy_send_client": false, 00:20:23.816 "zerocopy_threshold": 0, 00:20:23.816 "tls_version": 0, 00:20:23.816 "enable_ktls": false 00:20:23.816 } 00:20:23.816 } 00:20:23.816 ] 00:20:23.816 }, 00:20:23.816 { 00:20:23.816 "subsystem": "vmd", 00:20:23.816 "config": [] 00:20:23.816 }, 00:20:23.816 { 00:20:23.816 "subsystem": "accel", 00:20:23.817 "config": [ 00:20:23.817 { 00:20:23.817 "method": "accel_set_options", 00:20:23.817 "params": { 00:20:23.817 "small_cache_size": 128, 00:20:23.817 "large_cache_size": 16, 00:20:23.817 "task_count": 2048, 00:20:23.817 "sequence_count": 2048, 00:20:23.817 "buf_count": 2048 00:20:23.817 } 00:20:23.817 } 00:20:23.817 ] 00:20:23.817 }, 00:20:23.817 { 00:20:23.817 "subsystem": "bdev", 00:20:23.817 "config": [ 00:20:23.817 { 00:20:23.817 "method": "bdev_set_options", 00:20:23.817 "params": { 00:20:23.817 "bdev_io_pool_size": 65535, 00:20:23.817 "bdev_io_cache_size": 256, 00:20:23.817 "bdev_auto_examine": true, 00:20:23.817 "iobuf_small_cache_size": 128, 00:20:23.817 "iobuf_large_cache_size": 16 00:20:23.817 } 00:20:23.817 }, 00:20:23.817 { 00:20:23.817 "method": "bdev_raid_set_options", 00:20:23.817 "params": { 00:20:23.817 "process_window_size_kb": 1024, 00:20:23.817 "process_max_bandwidth_mb_sec": 0 00:20:23.817 } 00:20:23.817 }, 00:20:23.817 { 00:20:23.817 "method": "bdev_iscsi_set_options", 00:20:23.817 "params": { 00:20:23.817 "timeout_sec": 30 00:20:23.817 } 00:20:23.817 }, 00:20:23.817 { 00:20:23.817 "method": "bdev_nvme_set_options", 00:20:23.817 "params": { 00:20:23.817 "action_on_timeout": "none", 00:20:23.817 "timeout_us": 0, 00:20:23.817 "timeout_admin_us": 0, 00:20:23.817 "keep_alive_timeout_ms": 10000, 00:20:23.817 "arbitration_burst": 0, 00:20:23.817 "low_priority_weight": 0, 00:20:23.817 "medium_priority_weight": 0, 00:20:23.817 "high_priority_weight": 0, 00:20:23.817 "nvme_adminq_poll_period_us": 10000, 00:20:23.817 "nvme_ioq_poll_period_us": 0, 00:20:23.817 "io_queue_requests": 512, 00:20:23.817 "delay_cmd_submit": true, 00:20:23.817 "transport_retry_count": 4, 00:20:23.817 "bdev_retry_count": 3, 00:20:23.817 "transport_ack_timeout": 0, 00:20:23.817 "ctrlr_loss_timeout_sec": 0, 00:20:23.817 "reconnect_delay_sec": 0, 00:20:23.817 "fast_io_fail_timeout_sec": 0, 00:20:23.817 "disable_auto_failback": false, 00:20:23.817 "generate_uuids": false, 00:20:23.817 "transport_tos": 0, 00:20:23.817 "nvme_error_stat": false, 00:20:23.817 "rdma_srq_size": 0, 00:20:23.817 "io_path_stat": false, 00:20:23.817 "allow_accel_sequence": false, 00:20:23.817 "rdma_max_cq_size": 0, 00:20:23.817 "rdma_cm_event_timeout_ms": 0, 00:20:23.817 "dhchap_digests": [ 00:20:23.817 "sha256", 00:20:23.817 "sha384", 00:20:23.817 "sha512" 00:20:23.817 ], 00:20:23.817 "dhchap_dhgroups": [ 
00:20:23.817 "null", 00:20:23.817 "ffdhe2048", 00:20:23.817 "ffdhe3072", 00:20:23.817 "ffdhe4096", 00:20:23.817 "ffdhe6144", 00:20:23.817 "ffdhe8192" 00:20:23.817 ] 00:20:23.817 } 00:20:23.817 }, 00:20:23.817 { 00:20:23.817 "method": "bdev_nvme_attach_controller", 00:20:23.817 "params": { 00:20:23.817 "name": "nvme0", 00:20:23.817 "trtype": "TCP", 00:20:23.817 "adrfam": "IPv4", 00:20:23.817 "traddr": "10.0.0.2", 00:20:23.817 "trsvcid": "4420", 00:20:23.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.817 "prchk_reftag": false, 00:20:23.817 "prchk_guard": false, 00:20:23.817 "ctrlr_loss_timeout_sec": 0, 00:20:23.817 "reconnect_delay_sec": 0, 00:20:23.817 "fast_io_fail_timeout_sec": 0, 00:20:23.817 "psk": "key0", 00:20:23.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.817 "hdgst": false, 00:20:23.817 "ddgst": false, 00:20:23.817 "multipath": "multipath" 00:20:23.817 } 00:20:23.817 }, 00:20:23.817 { 00:20:23.817 "method": "bdev_nvme_set_hotplug", 00:20:23.817 "params": { 00:20:23.817 "period_us": 100000, 00:20:23.817 "enable": false 00:20:23.817 } 00:20:23.817 }, 00:20:23.817 { 00:20:23.817 "method": "bdev_enable_histogram", 00:20:23.817 "params": { 00:20:23.817 "name": "nvme0n1", 00:20:23.817 "enable": true 00:20:23.817 } 00:20:23.817 }, 00:20:23.817 { 00:20:23.817 "method": "bdev_wait_for_examine" 00:20:23.817 } 00:20:23.817 ] 00:20:23.817 }, 00:20:23.817 { 00:20:23.817 "subsystem": "nbd", 00:20:23.817 "config": [] 00:20:23.817 } 00:20:23.817 ] 00:20:23.817 }' 00:20:23.817 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1669966 00:20:23.817 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1669966 ']' 00:20:23.817 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1669966 00:20:23.817 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:23.817 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:23.817 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1669966 00:20:23.817 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:23.817 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:23.817 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1669966' 00:20:23.817 killing process with pid 1669966 00:20:23.817 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1669966 00:20:23.817 Received shutdown signal, test time was about 1.000000 seconds 00:20:23.817 00:20:23.817 Latency(us) 00:20:23.817 [2024-11-04T11:24:58.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.817 [2024-11-04T11:24:58.387Z] =================================================================================================================== 00:20:23.817 [2024-11-04T11:24:58.387Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:23.817 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1669966 00:20:24.078 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1669912 00:20:24.078 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1669912 ']' 00:20:24.078 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@954 -- # kill -0 1669912 00:20:24.078 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:24.078 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:24.078 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1669912 00:20:24.078 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:24.078 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:24.078 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1669912' 00:20:24.078 killing process with pid 1669912 00:20:24.078 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1669912 00:20:24.078 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1669912 00:20:24.078 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:24.078 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:24.078 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:24.078 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:24.078 "subsystems": [ 00:20:24.078 { 00:20:24.078 "subsystem": "keyring", 00:20:24.078 "config": [ 00:20:24.078 { 00:20:24.078 "method": "keyring_file_add_key", 00:20:24.078 "params": { 00:20:24.078 "name": "key0", 00:20:24.078 "path": "/tmp/tmp.GkQqcE62Uk" 00:20:24.078 } 00:20:24.078 } 00:20:24.078 ] 00:20:24.078 }, 00:20:24.078 { 00:20:24.078 "subsystem": "iobuf", 00:20:24.078 "config": [ 00:20:24.078 { 00:20:24.078 "method": "iobuf_set_options", 00:20:24.078 "params": { 00:20:24.078 "small_pool_count": 8192, 00:20:24.078 "large_pool_count": 1024, 00:20:24.078 "small_bufsize": 8192, 00:20:24.078 "large_bufsize": 135168 00:20:24.078 } 00:20:24.078 } 00:20:24.078 ] 00:20:24.078 }, 00:20:24.078 { 00:20:24.078 "subsystem": "sock", 00:20:24.078 "config": [ 00:20:24.078 { 00:20:24.078 "method": "sock_set_default_impl", 00:20:24.078 "params": { 00:20:24.078 "impl_name": "posix" 00:20:24.078 } 00:20:24.078 }, 00:20:24.078 { 00:20:24.078 "method": "sock_impl_set_options", 00:20:24.078 "params": { 00:20:24.078 "impl_name": "ssl", 00:20:24.078 "recv_buf_size": 4096, 00:20:24.078 "send_buf_size": 4096, 00:20:24.079 "enable_recv_pipe": true, 00:20:24.079 "enable_quickack": false, 00:20:24.079 "enable_placement_id": 0, 00:20:24.079 "enable_zerocopy_send_server": true, 00:20:24.079 "enable_zerocopy_send_client": false, 00:20:24.079 "zerocopy_threshold": 0, 00:20:24.079 "tls_version": 0, 00:20:24.079 "enable_ktls": false 00:20:24.079 } 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "method": "sock_impl_set_options", 00:20:24.079 "params": { 00:20:24.079 "impl_name": "posix", 00:20:24.079 "recv_buf_size": 2097152, 00:20:24.079 "send_buf_size": 2097152, 00:20:24.079 "enable_recv_pipe": true, 00:20:24.079 "enable_quickack": false, 00:20:24.079 "enable_placement_id": 0, 00:20:24.079 "enable_zerocopy_send_server": true, 00:20:24.079 "enable_zerocopy_send_client": false, 00:20:24.079 "zerocopy_threshold": 0, 00:20:24.079 "tls_version": 0, 00:20:24.079 "enable_ktls": false 00:20:24.079 } 00:20:24.079 } 00:20:24.079 ] 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 
"subsystem": "vmd", 00:20:24.079 "config": [] 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "subsystem": "accel", 00:20:24.079 "config": [ 00:20:24.079 { 00:20:24.079 "method": "accel_set_options", 00:20:24.079 "params": { 00:20:24.079 "small_cache_size": 128, 00:20:24.079 "large_cache_size": 16, 00:20:24.079 "task_count": 2048, 00:20:24.079 "sequence_count": 2048, 00:20:24.079 "buf_count": 2048 00:20:24.079 } 00:20:24.079 } 00:20:24.079 ] 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "subsystem": "bdev", 00:20:24.079 "config": [ 00:20:24.079 { 00:20:24.079 "method": "bdev_set_options", 00:20:24.079 "params": { 00:20:24.079 "bdev_io_pool_size": 65535, 00:20:24.079 "bdev_io_cache_size": 256, 00:20:24.079 "bdev_auto_examine": true, 00:20:24.079 "iobuf_small_cache_size": 128, 00:20:24.079 "iobuf_large_cache_size": 16 00:20:24.079 } 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "method": "bdev_raid_set_options", 00:20:24.079 "params": { 00:20:24.079 "process_window_size_kb": 1024, 00:20:24.079 "process_max_bandwidth_mb_sec": 0 00:20:24.079 } 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "method": "bdev_iscsi_set_options", 00:20:24.079 "params": { 00:20:24.079 "timeout_sec": 30 00:20:24.079 } 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "method": "bdev_nvme_set_options", 00:20:24.079 "params": { 00:20:24.079 "action_on_timeout": "none", 00:20:24.079 "timeout_us": 0, 00:20:24.079 "timeout_admin_us": 0, 00:20:24.079 "keep_alive_timeout_ms": 10000, 00:20:24.079 "arbitration_burst": 0, 00:20:24.079 "low_priority_weight": 0, 00:20:24.079 "medium_priority_weight": 0, 00:20:24.079 "high_priority_weight": 0, 00:20:24.079 "nvme_adminq_poll_period_us": 10000, 00:20:24.079 "nvme_ioq_poll_period_us": 0, 00:20:24.079 "io_queue_requests": 0, 00:20:24.079 "delay_cmd_submit": true, 00:20:24.079 "transport_retry_count": 4, 00:20:24.079 "bdev_retry_count": 3, 00:20:24.079 "transport_ack_timeout": 0, 00:20:24.079 "ctrlr_loss_timeout_sec": 0, 00:20:24.079 "reconnect_delay_sec": 0, 00:20:24.079 "fast_io_fail_timeout_sec": 0, 00:20:24.079 "disable_auto_failback": false, 00:20:24.079 "generate_uuids": false, 00:20:24.079 "transport_tos": 0, 00:20:24.079 "nvme_error_stat": false, 00:20:24.079 "rdma_srq_size": 0, 00:20:24.079 "io_path_stat": false, 00:20:24.079 "allow_accel_sequence": false, 00:20:24.079 "rdma_max_cq_size": 0, 00:20:24.079 "rdma_cm_event_timeout_ms": 0, 00:20:24.079 "dhchap_digests": [ 00:20:24.079 "sha256", 00:20:24.079 "sha384", 00:20:24.079 "sha512" 00:20:24.079 ], 00:20:24.079 "dhchap_dhgroups": [ 00:20:24.079 "null", 00:20:24.079 "ffdhe2048", 00:20:24.079 "ffdhe3072", 00:20:24.079 "ffdhe4096", 00:20:24.079 "ffdhe6144", 00:20:24.079 "ffdhe8192" 00:20:24.079 ] 00:20:24.079 } 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "method": "bdev_nvme_set_hotplug", 00:20:24.079 "params": { 00:20:24.079 "period_us": 100000, 00:20:24.079 "enable": false 00:20:24.079 } 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "method": "bdev_malloc_create", 00:20:24.079 "params": { 00:20:24.079 "name": "malloc0", 00:20:24.079 "num_blocks": 8192, 00:20:24.079 "block_size": 4096, 00:20:24.079 "physical_block_size": 4096, 00:20:24.079 "uuid": "c8c592b7-f0da-458c-9160-bb0aefcb5030", 00:20:24.079 "optimal_io_boundary": 0, 00:20:24.079 "md_size": 0, 00:20:24.079 "dif_type": 0, 00:20:24.079 "dif_is_head_of_md": false, 00:20:24.079 "dif_pi_format": 0 00:20:24.079 } 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "method": "bdev_wait_for_examine" 00:20:24.079 } 00:20:24.079 ] 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "subsystem": "nbd", 
00:20:24.079 "config": [] 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "subsystem": "scheduler", 00:20:24.079 "config": [ 00:20:24.079 { 00:20:24.079 "method": "framework_set_scheduler", 00:20:24.079 "params": { 00:20:24.079 "name": "static" 00:20:24.079 } 00:20:24.079 } 00:20:24.079 ] 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "subsystem": "nvmf", 00:20:24.079 "config": [ 00:20:24.079 { 00:20:24.079 "method": "nvmf_set_config", 00:20:24.079 "params": { 00:20:24.079 "discovery_filter": "match_any", 00:20:24.079 "admin_cmd_passthru": { 00:20:24.079 "identify_ctrlr": false 00:20:24.079 }, 00:20:24.079 "dhchap_digests": [ 00:20:24.079 "sha256", 00:20:24.079 "sha384", 00:20:24.079 "sha512" 00:20:24.079 ], 00:20:24.079 "dhchap_dhgroups": [ 00:20:24.079 "null", 00:20:24.079 "ffdhe2048", 00:20:24.079 "ffdhe3072", 00:20:24.079 "ffdhe4096", 00:20:24.079 "ffdhe6144", 00:20:24.079 "ffdhe8192" 00:20:24.079 ] 00:20:24.079 } 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "method": "nvmf_set_max_subsystems", 00:20:24.079 "params": { 00:20:24.079 "max_subsystems": 1024 00:20:24.079 } 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "method": "nvmf_set_crdt", 00:20:24.079 "params": { 00:20:24.079 "crdt1": 0, 00:20:24.079 "crdt2": 0, 00:20:24.079 "crdt3": 0 00:20:24.079 } 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "method": "nvmf_create_transport", 00:20:24.079 "params": { 00:20:24.079 "trtype": "TCP", 00:20:24.079 "max_queue_depth": 128, 00:20:24.079 "max_io_qpairs_per_ctrlr": 127, 00:20:24.079 "in_capsule_data_size": 4096, 00:20:24.079 "max_io_size": 131072, 00:20:24.079 "io_unit_size": 131072, 00:20:24.079 "max_aq_depth": 128, 00:20:24.079 "num_shared_buffers": 511, 00:20:24.079 "buf_cache_size": 4294967295, 00:20:24.079 "dif_insert_or_strip": false, 00:20:24.079 "zcopy": false, 00:20:24.079 "c2h_success": false, 00:20:24.079 "sock_priority": 0, 00:20:24.079 "abort_timeout_sec": 1, 00:20:24.079 "ack_timeout": 0, 00:20:24.079 "data_wr_pool_size": 0 00:20:24.079 } 00:20:24.079 }, 00:20:24.079 { 00:20:24.079 "method": "nvmf_create_subsystem", 00:20:24.079 "params": { 00:20:24.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.079 "allow_any_host": false, 00:20:24.079 "serial_number": "00000000000000000000", 00:20:24.079 "model_number": "SPDK bdev Controller", 00:20:24.079 "max_namespaces": 32, 00:20:24.079 "min_cntlid": 1, 00:20:24.079 "max_cntlid": 65519, 00:20:24.080 "ana_reporting": false 00:20:24.080 } 00:20:24.080 }, 00:20:24.080 { 00:20:24.080 "method": "nvmf_subsystem_add_host", 00:20:24.080 "params": { 00:20:24.080 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.080 "host": "nqn.2016-06.io.spdk:host1", 00:20:24.080 "psk": "key0" 00:20:24.080 } 00:20:24.080 }, 00:20:24.080 { 00:20:24.080 "method": "nvmf_subsystem_add_ns", 00:20:24.080 "params": { 00:20:24.080 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.080 "namespace": { 00:20:24.080 "nsid": 1, 00:20:24.080 "bdev_name": "malloc0", 00:20:24.080 "nguid": "C8C592B7F0DA458C9160BB0AEFCB5030", 00:20:24.080 "uuid": "c8c592b7-f0da-458c-9160-bb0aefcb5030", 00:20:24.080 "no_auto_visible": false 00:20:24.080 } 00:20:24.080 } 00:20:24.080 }, 00:20:24.080 { 00:20:24.080 "method": "nvmf_subsystem_add_listener", 00:20:24.080 "params": { 00:20:24.080 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.080 "listen_address": { 00:20:24.080 "trtype": "TCP", 00:20:24.080 "adrfam": "IPv4", 00:20:24.080 "traddr": "10.0.0.2", 00:20:24.080 "trsvcid": "4420" 00:20:24.080 }, 00:20:24.080 "secure_channel": false, 00:20:24.080 "sock_impl": "ssl" 00:20:24.080 } 00:20:24.080 } 00:20:24.080 ] 
00:20:24.080 } 00:20:24.080 ] 00:20:24.080 }' 00:20:24.080 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.080 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1670651 00:20:24.080 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1670651 00:20:24.080 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:24.080 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1670651 ']' 00:20:24.080 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.080 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:24.080 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.080 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:24.080 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.340 [2024-11-04 12:24:58.680335] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:20:24.340 [2024-11-04 12:24:58.680394] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.340 [2024-11-04 12:24:58.745642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.340 [2024-11-04 12:24:58.781094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.340 [2024-11-04 12:24:58.781132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.340 [2024-11-04 12:24:58.781140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.341 [2024-11-04 12:24:58.781147] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.341 [2024-11-04 12:24:58.781153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:24.341 [2024-11-04 12:24:58.781753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.601 [2024-11-04 12:24:58.980100] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.601 [2024-11-04 12:24:59.012111] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:24.601 [2024-11-04 12:24:59.012335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.172 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:25.172 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:25.172 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:25.172 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:25.172 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.172 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.172 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1670850 00:20:25.172 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1670850 /var/tmp/bdevperf.sock 00:20:25.172 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1670850 ']' 00:20:25.172 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.172 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:25.172 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
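
Note (illustrative sketch, not output from this run): the JSON blocks echoed above are consumed by nvmf_tgt through -c /dev/fd/62, and the same shape round-trips through the save_config RPC invoked at target/tls.sh@268 above. Assuming a running SPDK app on the default socket, a by-hand equivalent would be roughly:

    # snapshot the live configuration as JSON (same RPC the test used above)
    ./scripts/rpc.py -s /var/tmp/spdk.sock save_config > tgt_config.json
    # replay it into a fresh, unconfigured instance
    ./scripts/rpc.py -s /var/tmp/spdk.sock load_config < tgt_config.json
    # or hand it to the target at startup via a file descriptor, as done here
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 62< tgt_config.json
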
00:20:25.172 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:25.172 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:25.172 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.173 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:25.173 "subsystems": [ 00:20:25.173 { 00:20:25.173 "subsystem": "keyring", 00:20:25.173 "config": [ 00:20:25.173 { 00:20:25.173 "method": "keyring_file_add_key", 00:20:25.173 "params": { 00:20:25.173 "name": "key0", 00:20:25.173 "path": "/tmp/tmp.GkQqcE62Uk" 00:20:25.173 } 00:20:25.173 } 00:20:25.173 ] 00:20:25.173 }, 00:20:25.173 { 00:20:25.173 "subsystem": "iobuf", 00:20:25.173 "config": [ 00:20:25.173 { 00:20:25.173 "method": "iobuf_set_options", 00:20:25.173 "params": { 00:20:25.173 "small_pool_count": 8192, 00:20:25.173 "large_pool_count": 1024, 00:20:25.173 "small_bufsize": 8192, 00:20:25.173 "large_bufsize": 135168 00:20:25.173 } 00:20:25.173 } 00:20:25.173 ] 00:20:25.173 }, 00:20:25.173 { 00:20:25.173 "subsystem": "sock", 00:20:25.173 "config": [ 00:20:25.173 { 00:20:25.173 "method": "sock_set_default_impl", 00:20:25.173 "params": { 00:20:25.173 "impl_name": "posix" 00:20:25.173 } 00:20:25.173 }, 00:20:25.173 { 00:20:25.173 "method": "sock_impl_set_options", 00:20:25.173 "params": { 00:20:25.173 "impl_name": "ssl", 00:20:25.173 "recv_buf_size": 4096, 00:20:25.173 "send_buf_size": 4096, 00:20:25.173 "enable_recv_pipe": true, 00:20:25.173 "enable_quickack": false, 00:20:25.173 "enable_placement_id": 0, 00:20:25.173 "enable_zerocopy_send_server": true, 00:20:25.173 "enable_zerocopy_send_client": false, 00:20:25.173 "zerocopy_threshold": 0, 00:20:25.173 "tls_version": 0, 00:20:25.173 "enable_ktls": false 00:20:25.173 } 00:20:25.173 }, 00:20:25.173 { 00:20:25.173 "method": "sock_impl_set_options", 00:20:25.173 "params": { 00:20:25.173 "impl_name": "posix", 00:20:25.173 "recv_buf_size": 2097152, 00:20:25.173 "send_buf_size": 2097152, 00:20:25.173 "enable_recv_pipe": true, 00:20:25.173 "enable_quickack": false, 00:20:25.173 "enable_placement_id": 0, 00:20:25.173 "enable_zerocopy_send_server": true, 00:20:25.173 "enable_zerocopy_send_client": false, 00:20:25.173 "zerocopy_threshold": 0, 00:20:25.173 "tls_version": 0, 00:20:25.173 "enable_ktls": false 00:20:25.173 } 00:20:25.173 } 00:20:25.173 ] 00:20:25.173 }, 00:20:25.173 { 00:20:25.173 "subsystem": "vmd", 00:20:25.173 "config": [] 00:20:25.173 }, 00:20:25.173 { 00:20:25.173 "subsystem": "accel", 00:20:25.173 "config": [ 00:20:25.173 { 00:20:25.173 "method": "accel_set_options", 00:20:25.173 "params": { 00:20:25.173 "small_cache_size": 128, 00:20:25.173 "large_cache_size": 16, 00:20:25.173 "task_count": 2048, 00:20:25.173 "sequence_count": 2048, 00:20:25.173 "buf_count": 2048 00:20:25.173 } 00:20:25.173 } 00:20:25.173 ] 00:20:25.173 }, 00:20:25.173 { 00:20:25.173 "subsystem": "bdev", 00:20:25.173 "config": [ 00:20:25.173 { 00:20:25.173 "method": "bdev_set_options", 00:20:25.173 "params": { 00:20:25.173 "bdev_io_pool_size": 65535, 00:20:25.173 "bdev_io_cache_size": 256, 00:20:25.173 "bdev_auto_examine": true, 00:20:25.173 "iobuf_small_cache_size": 128, 00:20:25.173 "iobuf_large_cache_size": 16 00:20:25.173 } 00:20:25.173 }, 00:20:25.173 { 00:20:25.173 "method": "bdev_raid_set_options", 00:20:25.173 
"params": { 00:20:25.173 "process_window_size_kb": 1024, 00:20:25.173 "process_max_bandwidth_mb_sec": 0 00:20:25.173 } 00:20:25.173 }, 00:20:25.173 { 00:20:25.173 "method": "bdev_iscsi_set_options", 00:20:25.173 "params": { 00:20:25.173 "timeout_sec": 30 00:20:25.173 } 00:20:25.173 }, 00:20:25.173 { 00:20:25.173 "method": "bdev_nvme_set_options", 00:20:25.173 "params": { 00:20:25.173 "action_on_timeout": "none", 00:20:25.173 "timeout_us": 0, 00:20:25.173 "timeout_admin_us": 0, 00:20:25.173 "keep_alive_timeout_ms": 10000, 00:20:25.173 "arbitration_burst": 0, 00:20:25.173 "low_priority_weight": 0, 00:20:25.173 "medium_priority_weight": 0, 00:20:25.173 "high_priority_weight": 0, 00:20:25.173 "nvme_adminq_poll_period_us": 10000, 00:20:25.173 "nvme_ioq_poll_period_us": 0, 00:20:25.173 "io_queue_requests": 512, 00:20:25.173 "delay_cmd_submit": true, 00:20:25.173 "transport_retry_count": 4, 00:20:25.173 "bdev_retry_count": 3, 00:20:25.173 "transport_ack_timeout": 0, 00:20:25.173 "ctrlr_loss_timeout_sec": 0, 00:20:25.173 "reconnect_delay_sec": 0, 00:20:25.173 "fast_io_fail_timeout_sec": 0, 00:20:25.173 "disable_auto_failback": false, 00:20:25.173 "generate_uuids": false, 00:20:25.173 "transport_tos": 0, 00:20:25.173 "nvme_error_stat": false, 00:20:25.173 "rdma_srq_size": 0, 00:20:25.173 "io_path_stat": false, 00:20:25.173 "allow_accel_sequence": false, 00:20:25.173 "rdma_max_cq_size": 0, 00:20:25.173 "rdma_cm_event_timeout_ms": 0, 00:20:25.173 "dhchap_digests": [ 00:20:25.173 "sha256", 00:20:25.173 "sha384", 00:20:25.173 "sha512" 00:20:25.173 ], 00:20:25.173 "dhchap_dhgroups": [ 00:20:25.173 "null", 00:20:25.173 "ffdhe2048", 00:20:25.173 "ffdhe3072", 00:20:25.173 "ffdhe4096", 00:20:25.173 "ffdhe6144", 00:20:25.173 "ffdhe8192" 00:20:25.173 ] 00:20:25.174 } 00:20:25.174 }, 00:20:25.174 { 00:20:25.174 "method": "bdev_nvme_attach_controller", 00:20:25.174 "params": { 00:20:25.174 "name": "nvme0", 00:20:25.174 "trtype": "TCP", 00:20:25.174 "adrfam": "IPv4", 00:20:25.174 "traddr": "10.0.0.2", 00:20:25.174 "trsvcid": "4420", 00:20:25.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.174 "prchk_reftag": false, 00:20:25.174 "prchk_guard": false, 00:20:25.174 "ctrlr_loss_timeout_sec": 0, 00:20:25.174 "reconnect_delay_sec": 0, 00:20:25.174 "fast_io_fail_timeout_sec": 0, 00:20:25.174 "psk": "key0", 00:20:25.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.174 "hdgst": false, 00:20:25.174 "ddgst": false, 00:20:25.174 "multipath": "multipath" 00:20:25.174 } 00:20:25.174 }, 00:20:25.174 { 00:20:25.174 "method": "bdev_nvme_set_hotplug", 00:20:25.174 "params": { 00:20:25.174 "period_us": 100000, 00:20:25.174 "enable": false 00:20:25.174 } 00:20:25.174 }, 00:20:25.174 { 00:20:25.174 "method": "bdev_enable_histogram", 00:20:25.174 "params": { 00:20:25.174 "name": "nvme0n1", 00:20:25.174 "enable": true 00:20:25.174 } 00:20:25.174 }, 00:20:25.174 { 00:20:25.174 "method": "bdev_wait_for_examine" 00:20:25.174 } 00:20:25.174 ] 00:20:25.174 }, 00:20:25.174 { 00:20:25.174 "subsystem": "nbd", 00:20:25.174 "config": [] 00:20:25.174 } 00:20:25.174 ] 00:20:25.174 }' 00:20:25.174 [2024-11-04 12:24:59.561279] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:20:25.174 [2024-11-04 12:24:59.561333] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1670850 ] 00:20:25.174 [2024-11-04 12:24:59.635965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.174 [2024-11-04 12:24:59.666721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.434 [2024-11-04 12:24:59.801202] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.005 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:26.005 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:26.005 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:26.005 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:26.005 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.005 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:26.264 Running I/O for 1 seconds... 00:20:27.203 4219.00 IOPS, 16.48 MiB/s 00:20:27.203 Latency(us) 00:20:27.203 [2024-11-04T11:25:01.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.203 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:27.203 Verification LBA range: start 0x0 length 0x2000 00:20:27.203 nvme0n1 : 1.02 4246.74 16.59 0.00 0.00 29874.68 5734.40 65099.09 00:20:27.203 [2024-11-04T11:25:01.773Z] =================================================================================================================== 00:20:27.203 [2024-11-04T11:25:01.773Z] Total : 4246.74 16.59 0.00 0.00 29874.68 5734.40 65099.09 00:20:27.203 { 00:20:27.203 "results": [ 00:20:27.203 { 00:20:27.203 "job": "nvme0n1", 00:20:27.203 "core_mask": "0x2", 00:20:27.203 "workload": "verify", 00:20:27.203 "status": "finished", 00:20:27.203 "verify_range": { 00:20:27.203 "start": 0, 00:20:27.203 "length": 8192 00:20:27.203 }, 00:20:27.203 "queue_depth": 128, 00:20:27.203 "io_size": 4096, 00:20:27.203 "runtime": 1.023608, 00:20:27.203 "iops": 4246.742893764019, 00:20:27.203 "mibps": 16.5888394287657, 00:20:27.203 "io_failed": 0, 00:20:27.203 "io_timeout": 0, 00:20:27.203 "avg_latency_us": 29874.677438846713, 00:20:27.203 "min_latency_us": 5734.4, 00:20:27.203 "max_latency_us": 65099.09333333333 00:20:27.203 } 00:20:27.203 ], 00:20:27.203 "core_count": 1 00:20:27.203 } 00:20:27.203 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:27.203 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:27.203 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:27.203 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:27.203 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:27.203 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 
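
Note (illustrative sketch, not output from this run): the bdevperf summary and the JSON results above are internally consistent; the MiB/s figure is IOPS times the 4096-byte I/O size divided by 2^20, with the averages taken over the reported 1.023608 s runtime. A quick check:

    # 4246.742893764019 IOPS at 4 KiB per I/O should give the reported mibps
    awk 'BEGIN { iops = 4246.742893764019; io_size = 4096;
                 printf "%.13f MiB/s\n", iops * io_size / (1024 * 1024) }'
    # prints 16.5888394287657 MiB/s, matching the "mibps" field
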
00:20:27.203 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:27.203 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:27.203 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:27.203 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:27.203 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:27.203 nvmf_trace.0 00:20:27.203 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:27.203 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1670850 00:20:27.203 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1670850 ']' 00:20:27.203 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1670850 00:20:27.203 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:27.203 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.204 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1670850 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1670850' 00:20:27.464 killing process with pid 1670850 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1670850 00:20:27.464 Received shutdown signal, test time was about 1.000000 seconds 00:20:27.464 00:20:27.464 Latency(us) 00:20:27.464 [2024-11-04T11:25:02.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.464 [2024-11-04T11:25:02.034Z] =================================================================================================================== 00:20:27.464 [2024-11-04T11:25:02.034Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1670850 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:27.464 rmmod nvme_tcp 00:20:27.464 rmmod nvme_fabrics 00:20:27.464 rmmod nvme_keyring 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:27.464 12:25:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 1670651 ']' 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 1670651 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1670651 ']' 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1670651 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.464 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1670651 00:20:27.724 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:27.725 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:27.725 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1670651' 00:20:27.725 killing process with pid 1670651 00:20:27.725 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1670651 00:20:27.725 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1670651 00:20:27.725 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:27.725 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:27.725 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:27.725 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:27.725 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:20:27.725 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:27.725 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:20:27.725 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:27.725 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:27.725 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.725 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.725 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.zwWl0rMdDr /tmp/tmp.euVQcnCXlV /tmp/tmp.GkQqcE62Uk 00:20:30.268 00:20:30.268 real 1m21.397s 00:20:30.268 user 2m6.329s 00:20:30.268 sys 0m26.368s 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.268 ************************************ 00:20:30.268 END TEST nvmf_tls 
00:20:30.268 ************************************ 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:30.268 ************************************ 00:20:30.268 START TEST nvmf_fips 00:20:30.268 ************************************ 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:30.268 * Looking for test storage... 00:20:30.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:30.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.268 --rc genhtml_branch_coverage=1 00:20:30.268 --rc genhtml_function_coverage=1 00:20:30.268 --rc genhtml_legend=1 00:20:30.268 --rc geninfo_all_blocks=1 00:20:30.268 --rc geninfo_unexecuted_blocks=1 00:20:30.268 00:20:30.268 ' 00:20:30.268 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:30.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.268 --rc genhtml_branch_coverage=1 00:20:30.268 --rc genhtml_function_coverage=1 00:20:30.268 --rc genhtml_legend=1 00:20:30.268 --rc geninfo_all_blocks=1 00:20:30.268 --rc geninfo_unexecuted_blocks=1 00:20:30.268 00:20:30.268 ' 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:30.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.269 --rc genhtml_branch_coverage=1 00:20:30.269 --rc genhtml_function_coverage=1 00:20:30.269 --rc genhtml_legend=1 00:20:30.269 --rc geninfo_all_blocks=1 00:20:30.269 --rc geninfo_unexecuted_blocks=1 00:20:30.269 00:20:30.269 ' 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:30.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.269 --rc genhtml_branch_coverage=1 00:20:30.269 --rc genhtml_function_coverage=1 00:20:30.269 --rc genhtml_legend=1 00:20:30.269 --rc geninfo_all_blocks=1 00:20:30.269 --rc geninfo_unexecuted_blocks=1 00:20:30.269 00:20:30.269 ' 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:30.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:30.269 12:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:30.269 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:30.270 Error setting digest 00:20:30.270 40A25C6C7E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:30.270 40A25C6C7E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:30.270 
12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:30.270 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.405 12:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:38.405 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:38.405 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.405 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:38.406 12:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:38.406 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:38.406 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:38.406 12:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:38.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:20:38.406 00:20:38.406 --- 10.0.0.2 ping statistics --- 00:20:38.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.406 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:38.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:20:38.406 00:20:38.406 --- 10.0.0.1 ping statistics --- 00:20:38.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.406 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=1675575 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 1675575 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1675575 ']' 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:38.406 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.406 [2024-11-04 12:25:12.054497] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
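[editor's note] The block ending here is the stock nvmftestinit network bring-up: the target-side port is moved into a private network namespace, the two ends get 10.0.0.1/24 and 10.0.0.2/24, an iptables rule opens the NVMe/TCP listen port, and one ping in each direction proves the path before any NVMe traffic flows. A minimal standalone sketch of the same sequence, with eth_ini/eth_tgt as stand-ins for the cvl_0_1/cvl_0_0 names in the trace and nvmf_tgt_ns as a placeholder namespace:

NS=nvmf_tgt_ns                              # placeholder; the trace uses cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set eth_tgt netns "$NS"             # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev eth_ini         # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev eth_tgt
ip link set eth_ini up
ip netns exec "$NS" ip link set eth_tgt up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
ping -c 1 10.0.0.2                          # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator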
00:20:38.406 [2024-11-04 12:25:12.054575] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.406 [2024-11-04 12:25:12.143593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.406 [2024-11-04 12:25:12.194095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.406 [2024-11-04 12:25:12.194150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.406 [2024-11-04 12:25:12.194159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.406 [2024-11-04 12:25:12.194166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.406 [2024-11-04 12:25:12.194173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.406 [2024-11-04 12:25:12.194974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.406 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.406 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:38.406 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:38.406 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:38.406 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.406 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.406 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:38.406 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:38.406 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:38.406 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.yDj 00:20:38.406 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:38.406 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.yDj 00:20:38.406 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.yDj 00:20:38.406 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.yDj 00:20:38.406 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:38.668 [2024-11-04 12:25:13.065151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.668 [2024-11-04 12:25:13.081148] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:38.668 [2024-11-04 12:25:13.081435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.668 malloc0 00:20:38.668 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:38.668 12:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1675737 00:20:38.668 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1675737 /var/tmp/bdevperf.sock 00:20:38.668 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:38.668 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1675737 ']' 00:20:38.668 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.668 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:38.668 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:38.668 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:38.668 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.668 [2024-11-04 12:25:13.215423] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:20:38.668 [2024-11-04 12:25:13.215499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1675737 ] 00:20:38.927 [2024-11-04 12:25:13.273835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.927 [2024-11-04 12:25:13.311696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.497 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:39.497 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:39.497 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.yDj 00:20:39.757 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:40.018 [2024-11-04 12:25:14.327053] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.018 TLSTESTn1 00:20:40.018 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:40.018 Running I/O for 10 seconds... 
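[editor's note] Condensed from the RPC calls in the trace above, the whole TLS hookup the FIPS test performs before I/O starts fits in a few lines: write the PSK from the log to a 0600 file, register it over bdevperf's RPC socket, then attach a controller that references the key (paths here are relative to the spdk checkout; the key, addresses, and NQNs are the ones shown in the trace):

KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
KEY_PATH=$(mktemp -t spdk-psk.XXX)
echo -n "$KEY" > "$KEY_PATH"                # no trailing newline, matching the trace
chmod 0600 "$KEY_PATH"
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_PATH"
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0

bdevperf.py perform_tests then drives verify I/O against the resulting TLSTESTn1 bdev, which is what produces the per-second IOPS samples that follow.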
00:20:42.339 6081.00 IOPS, 23.75 MiB/s [2024-11-04T11:25:17.574Z] 5755.50 IOPS, 22.48 MiB/s [2024-11-04T11:25:18.603Z] 5232.33 IOPS, 20.44 MiB/s [2024-11-04T11:25:19.543Z] 5233.25 IOPS, 20.44 MiB/s [2024-11-04T11:25:20.925Z] 5137.00 IOPS, 20.07 MiB/s [2024-11-04T11:25:21.867Z] 5110.00 IOPS, 19.96 MiB/s [2024-11-04T11:25:22.808Z] 5032.43 IOPS, 19.66 MiB/s [2024-11-04T11:25:23.748Z] 5003.88 IOPS, 19.55 MiB/s [2024-11-04T11:25:24.689Z] 5022.78 IOPS, 19.62 MiB/s [2024-11-04T11:25:24.689Z] 5008.60 IOPS, 19.56 MiB/s 00:20:50.119 Latency(us) 00:20:50.119 [2024-11-04T11:25:24.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.119 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:50.119 Verification LBA range: start 0x0 length 0x2000 00:20:50.119 TLSTESTn1 : 10.01 5014.01 19.59 0.00 0.00 25495.02 6444.37 39103.15 00:20:50.119 [2024-11-04T11:25:24.689Z] =================================================================================================================== 00:20:50.119 [2024-11-04T11:25:24.689Z] Total : 5014.01 19.59 0.00 0.00 25495.02 6444.37 39103.15 00:20:50.119 { 00:20:50.119 "results": [ 00:20:50.119 { 00:20:50.119 "job": "TLSTESTn1", 00:20:50.119 "core_mask": "0x4", 00:20:50.119 "workload": "verify", 00:20:50.119 "status": "finished", 00:20:50.119 "verify_range": { 00:20:50.119 "start": 0, 00:20:50.119 "length": 8192 00:20:50.119 }, 00:20:50.119 "queue_depth": 128, 00:20:50.119 "io_size": 4096, 00:20:50.119 "runtime": 10.014546, 00:20:50.119 "iops": 5014.006625961876, 00:20:50.119 "mibps": 19.58596338266358, 00:20:50.119 "io_failed": 0, 00:20:50.119 "io_timeout": 0, 00:20:50.119 "avg_latency_us": 25495.02054155962, 00:20:50.119 "min_latency_us": 6444.373333333333, 00:20:50.119 "max_latency_us": 39103.14666666667 00:20:50.119 } 00:20:50.120 ], 00:20:50.120 "core_count": 1 00:20:50.120 } 00:20:50.120 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:50.120 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:50.120 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:20:50.120 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:20:50.120 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:50.120 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:50.120 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:50.120 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:50.120 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:50.120 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:50.120 nvmf_trace.0 00:20:50.120 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:20:50.120 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1675737 00:20:50.120 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1675737 ']' 00:20:50.120 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 1675737 00:20:50.120 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:50.120 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:50.120 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1675737 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1675737' 00:20:50.381 killing process with pid 1675737 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1675737 00:20:50.381 Received shutdown signal, test time was about 10.000000 seconds 00:20:50.381 00:20:50.381 Latency(us) 00:20:50.381 [2024-11-04T11:25:24.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.381 [2024-11-04T11:25:24.951Z] =================================================================================================================== 00:20:50.381 [2024-11-04T11:25:24.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1675737 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:50.381 rmmod nvme_tcp 00:20:50.381 rmmod nvme_fabrics 00:20:50.381 rmmod nvme_keyring 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 1675575 ']' 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 1675575 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1675575 ']' 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1675575 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:50.381 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1675575 00:20:50.642 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:50.642 12:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:50.642 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1675575' 00:20:50.642 killing process with pid 1675575 00:20:50.642 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1675575 00:20:50.642 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1675575 00:20:50.642 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:50.642 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:50.642 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:50.642 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:50.642 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:20:50.642 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:50.642 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:20:50.642 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:50.642 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:50.642 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.642 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.642 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.yDj 00:20:53.186 00:20:53.186 real 0m22.796s 00:20:53.186 user 0m24.041s 00:20:53.186 sys 0m9.947s 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:53.186 ************************************ 00:20:53.186 END TEST nvmf_fips 00:20:53.186 ************************************ 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:53.186 ************************************ 00:20:53.186 START TEST nvmf_control_msg_list 00:20:53.186 ************************************ 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:53.186 * Looking for test storage... 
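[editor's note] Teardown here follows the usual autotest pattern: archive nvmf_trace.0 out of /dev/shm, kill bdevperf and then the target by pid, unload nvme-tcp/nvme-fabrics/nvme-keyring, and strip every SPDK_NVMF-tagged firewall rule before flushing the namespace addresses and removing the PSK file. A rough sketch of the killprocess helper as it behaves in this trace (the real helper in autotest_common.sh handles more cases, e.g. sudo-wrapped processes):

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 0              # nothing to do if it already exited
    local name
    name=$(ps --no-headers -o comm= "$pid") # reactor_1 / reactor_2 in this run
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                             # reap it (the test launched it as a child)
}
# SPDK-tagged iptables rules are dropped wholesale by filtering the save file:
iptables-save | grep -v SPDK_NVMF | iptables-restore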
00:20:53.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:53.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.186 --rc genhtml_branch_coverage=1 00:20:53.186 --rc genhtml_function_coverage=1 00:20:53.186 --rc genhtml_legend=1 00:20:53.186 --rc geninfo_all_blocks=1 00:20:53.186 --rc geninfo_unexecuted_blocks=1 00:20:53.186 00:20:53.186 ' 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:53.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.186 --rc genhtml_branch_coverage=1 00:20:53.186 --rc genhtml_function_coverage=1 00:20:53.186 --rc genhtml_legend=1 00:20:53.186 --rc geninfo_all_blocks=1 00:20:53.186 --rc geninfo_unexecuted_blocks=1 00:20:53.186 00:20:53.186 ' 00:20:53.186 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:53.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.186 --rc genhtml_branch_coverage=1 00:20:53.186 --rc genhtml_function_coverage=1 00:20:53.186 --rc genhtml_legend=1 00:20:53.187 --rc geninfo_all_blocks=1 00:20:53.187 --rc geninfo_unexecuted_blocks=1 00:20:53.187 00:20:53.187 ' 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:53.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.187 --rc genhtml_branch_coverage=1 00:20:53.187 --rc genhtml_function_coverage=1 00:20:53.187 --rc genhtml_legend=1 00:20:53.187 --rc geninfo_all_blocks=1 00:20:53.187 --rc geninfo_unexecuted_blocks=1 00:20:53.187 00:20:53.187 ' 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:53.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:53.187 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:01.334 12:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:01.334 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.334 12:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:01.334 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:01.334 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:01.334 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.334 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:01.335 12:25:34 
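
The trace above completes the point-to-point topology for this test: the first E810 port becomes the target inside a private network namespace, while the second stays in the default namespace as the initiator. A condensed, hedged recap of those commands (the cvl_0_0/cvl_0_1 names and the 10.0.0.0/24 addresses are what this particular run discovered; other rigs will differ); the firewall rule and ping checks that follow in the trace then verify reachability:

    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
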
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:01.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:01.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:21:01.335 00:21:01.335 --- 10.0.0.2 ping statistics --- 00:21:01.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.335 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:01.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:01.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:21:01.335 00:21:01.335 --- 10.0.0.1 ping statistics --- 00:21:01.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.335 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=1682286 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 1682286 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 1682286 ']' 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:01.335 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.335 [2024-11-04 12:25:34.920146] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:21:01.335 [2024-11-04 12:25:34.920232] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.335 [2024-11-04 12:25:34.992582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.335 [2024-11-04 12:25:35.034199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.335 [2024-11-04 12:25:35.034239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.335 [2024-11-04 12:25:35.034247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.335 [2024-11-04 12:25:35.034254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.335 [2024-11-04 12:25:35.034260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
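
At this point the harness has launched the target and is blocking until its RPC socket answers. A minimal sketch of that step, assuming the workspace paths from this log; the polling loop is a simplification of waitforlisten() from autotest_common.sh (whose max_retries=100 and rpc_addr=/var/tmp/spdk.sock defaults are visible in the trace), not the harness code itself:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # target died during startup
        [[ -S /var/tmp/spdk.sock ]] && break       # RPC socket is up, carry on
        sleep 0.1
    done
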
00:21:01.335 [2024-11-04 12:25:35.034885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.335 [2024-11-04 12:25:35.748591] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.335 Malloc0 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.335 12:25:35 
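
The rpc_cmd invocations traced here configure the target for the control-message-list case. Rendered as plain scripts/rpc.py calls (rpc_cmd is the harness wrapper around rpc.py, so take this as an equivalent sketch rather than the literal script): the transport is created with a single control message buffer (--control-msg-num 1) and a shrunken in-capsule data size, which is the resource the three perf initiators started below will contend for; the listener add follows immediately in the trace:

    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o \
        --in-capsule-data-size 768 --control-msg-num 1
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    $SPDK/scripts/rpc.py bdev_malloc_create -b Malloc0 32 512    # 32 MB bdev, 512 B blocks
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
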
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.335 [2024-11-04 12:25:35.799510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1682441 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1682442 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1682443 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1682441 00:21:01.335 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:01.336 [2024-11-04 12:25:35.869894] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:01.336 [2024-11-04 12:25:35.879840] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:01.336 [2024-11-04 12:25:35.889883] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:02.719 Initializing NVMe Controllers 00:21:02.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:02.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:02.719 Initialization complete. Launching workers. 
00:21:02.719 ========================================================
00:21:02.719 Latency(us)
00:21:02.719 Device Information : IOPS MiB/s Average min max
00:21:02.719 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1562.00 6.10 639.92 287.89 41976.52
00:21:02.719 ========================================================
00:21:02.719 Total : 1562.00 6.10 639.92 287.89 41976.52
00:21:02.719
00:21:02.719 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1682442
00:21:02.719 Initializing NVMe Controllers
00:21:02.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:02.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:21:02.719 Initialization complete. Launching workers.
00:21:02.719 ========================================================
00:21:02.719 Latency(us)
00:21:02.719 Device Information : IOPS MiB/s Average min max
00:21:02.719 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 53.00 0.21 19027.30 251.84 42073.30
00:21:02.719 ========================================================
00:21:02.719 Total : 53.00 0.21 19027.30 251.84 42073.30
00:21:02.719
00:21:02.719 [2024-11-04 12:25:36.981660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a210 is same with the state(6) to be set
00:21:02.719 Initializing NVMe Controllers
00:21:02.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:02.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:21:02.719 Initialization complete. Launching workers.
00:21:02.719 ========================================================
00:21:02.719 Latency(us)
00:21:02.719 Device Information : IOPS MiB/s Average min max
00:21:02.719 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1575.00 6.15 654.54 159.44 41201.38
00:21:02.719 ========================================================
00:21:02.720 Total : 1575.00 6.15 654.54 159.44 41201.38
00:21:02.720
00:21:02.720 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1682443
00:21:02.720 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:21:02.720 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:21:02.720 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup
00:21:02.720 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:21:02.720 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:02.720 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:21:02.720 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:02.720 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:02.720 rmmod nvme_tcp
00:21:02.720 rmmod nvme_fabrics
00:21:02.720 rmmod nvme_keyring
00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:21:02.720 12:25:37
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 1682286 ']' 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 1682286 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 1682286 ']' 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 1682286 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1682286 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1682286' 00:21:02.720 killing process with pid 1682286 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 1682286 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 1682286 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.720 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.262 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:05.262 00:21:05.262 real 0m12.140s 00:21:05.262 user 0m7.697s 00:21:05.262 sys 0m6.374s 00:21:05.262 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:05.262 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
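
For readers skipping the xtrace: the nvmftestfini teardown just traced collapses to roughly the following. The namespace removal runs behind xtrace_disable_per_cmd, so the netns delete line is an assumption about what _remove_spdk_ns does, and the exact signal killprocess sends is likewise not visible here:

    modprobe -v -r nvme-tcp                    # unloads nvme_tcp/nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                            # killprocess: stop the nvmf_tgt reactor
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rule
    ip netns delete cvl_0_0_ns_spdk            # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                   # clear the initiator-side address
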
common/autotest_common.sh@10 -- # set +x 00:21:05.262 ************************************ 00:21:05.262 END TEST nvmf_control_msg_list 00:21:05.263 ************************************ 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:05.263 ************************************ 00:21:05.263 START TEST nvmf_wait_for_buf 00:21:05.263 ************************************ 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:05.263 * Looking for test storage... 00:21:05.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:05.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.263 --rc genhtml_branch_coverage=1 00:21:05.263 --rc genhtml_function_coverage=1 00:21:05.263 --rc genhtml_legend=1 00:21:05.263 --rc geninfo_all_blocks=1 00:21:05.263 --rc geninfo_unexecuted_blocks=1 00:21:05.263 00:21:05.263 ' 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:05.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.263 --rc genhtml_branch_coverage=1 00:21:05.263 --rc genhtml_function_coverage=1 00:21:05.263 --rc genhtml_legend=1 00:21:05.263 --rc geninfo_all_blocks=1 00:21:05.263 --rc geninfo_unexecuted_blocks=1 00:21:05.263 00:21:05.263 ' 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:05.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.263 --rc genhtml_branch_coverage=1 00:21:05.263 --rc genhtml_function_coverage=1 00:21:05.263 --rc genhtml_legend=1 00:21:05.263 --rc geninfo_all_blocks=1 00:21:05.263 --rc geninfo_unexecuted_blocks=1 00:21:05.263 00:21:05.263 ' 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:05.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.263 --rc genhtml_branch_coverage=1 00:21:05.263 --rc genhtml_function_coverage=1 00:21:05.263 --rc genhtml_legend=1 00:21:05.263 --rc geninfo_all_blocks=1 00:21:05.263 --rc geninfo_unexecuted_blocks=1 00:21:05.263 00:21:05.263 ' 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.263 12:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.263 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:05.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:05.264 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.403 
12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.403 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:13.404 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:13.404 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:13.404 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:13.404 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.404 12:25:46 
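
The pair of 'Found net devices under ...' lines above is the wait_for_buf run redoing the same NIC discovery as before. A hedged sketch of the core loop in nvmf/common.sh, eliding the driver and link-state checks also visible in the trace: each allow-listed PCI function (Intel 0x8086 e810/x722 IDs, plus the Mellanox 0x15b3 list) is mapped to its kernel netdev name through sysfs:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs bound to this function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep ifname
        net_devs+=("${pci_net_devs[@]}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
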
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.404 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:13.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:21:13.404 00:21:13.404 --- 10.0.0.2 ping statistics --- 00:21:13.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.404 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:13.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:21:13.404 00:21:13.404 --- 10.0.0.1 ping statistics --- 00:21:13.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.404 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=1686860 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 1686860 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 1686860 ']' 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.404 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.404 [2024-11-04 12:25:47.133004] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:21:13.404 [2024-11-04 12:25:47.133075] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.404 [2024-11-04 12:25:47.208785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.404 [2024-11-04 12:25:47.250912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.405 [2024-11-04 12:25:47.250956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.405 [2024-11-04 12:25:47.250966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.405 [2024-11-04 12:25:47.250975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.405 [2024-11-04 12:25:47.250981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.405 [2024-11-04 12:25:47.251622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.405 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.405 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:21:13.405 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:13.405 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:13.405 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.665 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.665 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:13.665 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:13.665 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:13.665 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.665 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.665 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.665 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:13.665 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.665 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.665 12:25:48 
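
Because this nvmf_tgt instance was started with --wait-for-rpc, the test can shrink the buffer pools before the framework initializes; the rpc_cmd calls traced above are, in the same hedged rpc.py rendering as earlier, roughly:

    $SPDK/scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    $SPDK/scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    $SPDK/scripts/rpc.py framework_start_init    # only now does subsystem init proceed

With only 154 small iobuf entries and a transport sized at -u 8192 -n 24 -b 24, the 131072-byte reads spdk_nvme_perf issues below are evidently meant to starve the pool and push the TCP transport onto its wait-for-buffer path; the retry counter checked at the end of the test is how that is verified.
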
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.665 Malloc0 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.665 [2024-11-04 12:25:48.078651] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:13.665 [2024-11-04 12:25:48.114877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.665 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:13.665 [2024-11-04 12:25:48.172839] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:15.576 Initializing NVMe Controllers 00:21:15.576 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:15.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:15.576 Initialization complete. Launching workers. 00:21:15.576 ======================================================== 00:21:15.576 Latency(us) 00:21:15.576 Device Information : IOPS MiB/s Average min max 00:21:15.576 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32295.50 8001.42 63858.23 00:21:15.576 ======================================================== 00:21:15.576 Total : 129.00 16.12 32295.50 8001.42 63858.23 00:21:15.576 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:15.577 rmmod nvme_tcp 00:21:15.577 rmmod nvme_fabrics 00:21:15.577 rmmod nvme_keyring 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 1686860 ']' 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 1686860 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 1686860 ']' 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 1686860 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1686860 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1686860' 00:21:15.577 killing process with pid 1686860 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 1686860 00:21:15.577 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 1686860 00:21:15.577 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:15.577 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:15.577 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:15.577 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:15.577 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:21:15.577 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:15.577 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:21:15.577 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:15.577 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:15.577 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.577 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.577 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.120 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:18.120 00:21:18.120 real 0m12.682s 00:21:18.120 user 0m5.169s 00:21:18.120 sys 0m6.045s 00:21:18.120 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:18.120 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:18.120 ************************************ 00:21:18.120 END TEST nvmf_wait_for_buf 00:21:18.120 ************************************ 00:21:18.120 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:18.120 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:18.120 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:18.120 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:18.120 12:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:18.120 12:25:52 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:24.709 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:24.709 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:24.709 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:24.709 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:24.709 ************************************ 00:21:24.709 START TEST nvmf_perf_adq 00:21:24.709 ************************************ 00:21:24.709 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:24.709 * Looking for test storage... 00:21:24.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:24.709 12:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:24.709 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:24.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.710 --rc genhtml_branch_coverage=1 00:21:24.710 --rc genhtml_function_coverage=1 00:21:24.710 --rc genhtml_legend=1 00:21:24.710 --rc geninfo_all_blocks=1 00:21:24.710 --rc geninfo_unexecuted_blocks=1 00:21:24.710 00:21:24.710 ' 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:24.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.710 --rc genhtml_branch_coverage=1 00:21:24.710 --rc genhtml_function_coverage=1 00:21:24.710 --rc genhtml_legend=1 00:21:24.710 --rc geninfo_all_blocks=1 00:21:24.710 --rc geninfo_unexecuted_blocks=1 00:21:24.710 00:21:24.710 ' 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:24.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.710 --rc genhtml_branch_coverage=1 00:21:24.710 --rc genhtml_function_coverage=1 00:21:24.710 --rc genhtml_legend=1 00:21:24.710 --rc geninfo_all_blocks=1 00:21:24.710 --rc geninfo_unexecuted_blocks=1 00:21:24.710 00:21:24.710 ' 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:24.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.710 --rc genhtml_branch_coverage=1 00:21:24.710 --rc genhtml_function_coverage=1 00:21:24.710 --rc genhtml_legend=1 00:21:24.710 --rc geninfo_all_blocks=1 00:21:24.710 --rc geninfo_unexecuted_blocks=1 00:21:24.710 00:21:24.710 ' 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
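The scripts/common.sh calls traced above are the harness picking an lcov flag set: lt 1.15 2 asks whether the installed lcov (1.15) predates version 2, and cmp_versions answers by splitting both version strings on dots and dashes and comparing the fields numerically, left to right (each field is routed through a decimal() normalizer, elided here). A paraphrased sketch of that comparison, with function names matching the xtrace but bodies condensed:

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local -a ver1 ver2
      local v op=$2
      IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
      IFS=.-: read -ra ver2 <<< "$3"    # "2"    -> (2)
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          # missing fields compare as 0
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' || $op == '>=' ]]; return; }
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == *'='* ]]                # all fields equal
  }

Here the first fields already decide it (1 < 2), lt returns 0, and the pre-2.0 option spelling (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) is exported, which is exactly what the LCOV_OPTS export above shows.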
00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:24.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:24.710 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:24.710 12:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:31.302 12:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:31.302 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:31.302 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:31.302 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:31.302 12:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:31.302 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:31.303 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.303 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:31.303 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:31.303 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.303 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:31.303 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.303 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:31.303 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:31.303 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:31.303 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:31.303 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:32.687 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:34.598 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:39.887 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:39.887 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:39.887 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:39.887 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:39.887 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:39.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:39.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms
00:21:39.888
00:21:39.888 --- 10.0.0.2 ping statistics ---
00:21:39.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:39.888 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:39.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:39.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms
00:21:39.888
00:21:39.888 --- 10.0.0.1 ping statistics ---
00:21:39.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:39.888 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1697011
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1697011
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1697011 ']'
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:39.888 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:21:40.149 [2024-11-04 12:26:14.471713] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
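nvmftestinit has now turned the two E810 ports into a point-to-point rig: cvl_0_0 (10.0.0.2) was moved into the cvl_0_0_ns_spdk namespace to host the target, cvl_0_1 (10.0.0.1) stays in the root namespace on the initiator side, and the iptables rule opens TCP port 4420 between them, so the NVMe/TCP traffic actually crosses the NIC instead of short-circuiting through the kernel loopback. The target is started on four cores this time (-m 0xF), and the trace that follows runs adq_configure_nvmf_target, which tunes the posix sock implementation for the ADQ comparison; the 0 passed on this first run leaves placement-id grouping off as the baseline. A condensed sketch of that target-side configuration (same rpc_cmd assumption as above):

  rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  rpc_cmd framework_start_init
  rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  rpc_cmd bdev_malloc_create 64 512 -b Malloc1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_perf is then launched in the background against that listener (perfpid=1697089) and given two seconds to ramp up.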
00:21:40.149 [2024-11-04 12:26:14.471779] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.149 [2024-11-04 12:26:14.543248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:40.149 [2024-11-04 12:26:14.587500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.149 [2024-11-04 12:26:14.587542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.149 [2024-11-04 12:26:14.587551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.149 [2024-11-04 12:26:14.587558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.149 [2024-11-04 12:26:14.587564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.149 [2024-11-04 12:26:14.589445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.149 [2024-11-04 12:26:14.589577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.149 [2024-11-04 12:26:14.589733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.149 [2024-11-04 12:26:14.589734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.719 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:40.719 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:40.719 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:40.719 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:40.719 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.980 
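For reference, the target/initiator split that nvmf/common.sh traced above boils down to a short sequence. A minimal sketch, assuming two back-to-back E810 ports named cvl_0_0 and cvl_0_1 as in this run (the real helper also handles cleanup and multi-IP cases):

# target port moves into its own network namespace
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side (host netns)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port; the comment tag lets the iptr cleanup strip the rule later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                       # host -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1   # namespace -> host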
12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.980 [2024-11-04 12:26:15.431446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.980 Malloc1 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.980 [2024-11-04 12:26:15.504074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1697089 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:40.980 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:43.526 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:43.526 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.526 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.526 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.526 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:43.526 "tick_rate": 2400000000, 00:21:43.526 "poll_groups": [ 00:21:43.526 { 00:21:43.526 "name": "nvmf_tgt_poll_group_000", 00:21:43.526 "admin_qpairs": 1, 00:21:43.526 "io_qpairs": 1, 00:21:43.526 "current_admin_qpairs": 1, 00:21:43.526 "current_io_qpairs": 1, 00:21:43.526 "pending_bdev_io": 0, 00:21:43.526 "completed_nvme_io": 19818, 00:21:43.526 "transports": [ 00:21:43.526 { 00:21:43.526 "trtype": "TCP" 00:21:43.526 } 00:21:43.526 ] 00:21:43.526 }, 00:21:43.526 { 00:21:43.526 "name": "nvmf_tgt_poll_group_001", 00:21:43.526 "admin_qpairs": 0, 00:21:43.526 "io_qpairs": 1, 00:21:43.526 "current_admin_qpairs": 0, 00:21:43.526 "current_io_qpairs": 1, 00:21:43.526 "pending_bdev_io": 0, 00:21:43.526 "completed_nvme_io": 28249, 00:21:43.526 "transports": [ 00:21:43.526 { 00:21:43.526 "trtype": "TCP" 00:21:43.526 } 00:21:43.526 ] 00:21:43.526 }, 00:21:43.526 { 00:21:43.526 "name": "nvmf_tgt_poll_group_002", 00:21:43.526 "admin_qpairs": 0, 00:21:43.526 "io_qpairs": 1, 00:21:43.526 "current_admin_qpairs": 0, 00:21:43.526 "current_io_qpairs": 1, 00:21:43.526 "pending_bdev_io": 0, 00:21:43.526 "completed_nvme_io": 21085, 00:21:43.526 "transports": [ 00:21:43.526 { 00:21:43.526 "trtype": "TCP" 00:21:43.526 } 00:21:43.526 ] 00:21:43.526 }, 00:21:43.526 { 00:21:43.526 "name": "nvmf_tgt_poll_group_003", 00:21:43.526 "admin_qpairs": 0, 00:21:43.526 "io_qpairs": 1, 00:21:43.526 "current_admin_qpairs": 0, 00:21:43.526 "current_io_qpairs": 1, 00:21:43.526 "pending_bdev_io": 0, 00:21:43.526 "completed_nvme_io": 20195, 00:21:43.526 "transports": [ 00:21:43.526 { 00:21:43.526 "trtype": "TCP" 00:21:43.526 } 00:21:43.526 ] 00:21:43.526 } 00:21:43.526 ] 00:21:43.526 }' 00:21:43.526 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:43.526 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:43.526 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:43.526 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:43.526 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1697089 00:21:51.663 Initializing NVMe Controllers 00:21:51.663 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:51.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:51.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:51.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:51.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:21:51.663 Initialization complete. Launching workers. 00:21:51.663 ======================================================== 00:21:51.663 Latency(us) 00:21:51.663 Device Information : IOPS MiB/s Average min max 00:21:51.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10978.61 42.89 5841.85 1672.67 44682.32 00:21:51.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14449.48 56.44 4429.26 1180.67 10210.53 00:21:51.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14214.38 55.52 4502.28 1408.61 9931.62 00:21:51.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13150.89 51.37 4867.25 1132.68 11642.58 00:21:51.663 ======================================================== 00:21:51.663 Total : 52793.35 206.22 4851.78 1132.68 44682.32 00:21:51.663 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:51.663 rmmod nvme_tcp 00:21:51.663 rmmod nvme_fabrics 00:21:51.663 rmmod nvme_keyring 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1697011 ']' 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1697011 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1697011 ']' 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1697011 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1697011 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1697011' 00:21:51.663 killing process with pid 1697011 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1697011 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1697011 00:21:51.663 12:26:25 
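The adq_configure_nvmf_target sequence traced above (perf_adq.sh lines 42-49) is plain JSON-RPC against a target started with --wait-for-rpc. A sketch using scripts/rpc.py, with the path assumed relative to an SPDK checkout (the test itself goes through the rpc_cmd wrapper); SOCK_PRIORITY is 0 for this baseline run and 1 for the ADQ run that follows:

rpc=scripts/rpc.py                       # assumed path to SPDK's rpc.py
SOCK_PRIORITY=0
impl=$($rpc sock_get_default_impl | jq -r .impl_name)   # "posix" in this run
$rpc sock_impl_set_options -i "$impl" \
    --enable-placement-id "$SOCK_PRIORITY" --enable-zerocopy-send-server
$rpc framework_start_init                # finish the deferred app init
# transport flags copied from the trace above
$rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 \
    --sock-priority "$SOCK_PRIORITY"
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420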
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.663 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.573 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:53.573 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:53.573 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:53.573 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:55.029 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:57.060 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:02.350 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:02.350 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:02.350 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.350 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:02.351 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.351 12:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:02.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:22:02.351 00:22:02.351 --- 10.0.0.2 ping statistics --- 00:22:02.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.351 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:22:02.351 00:22:02.351 --- 10.0.0.1 ping statistics --- 00:22:02.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.351 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:02.351 net.core.busy_poll = 1 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:02.351 net.core.busy_read = 1 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:02.351 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:02.613 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:02.613 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:02.613 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:02.613 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:02.613 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:02.613 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.613 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1701849 00:22:02.613 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1701849 00:22:02.613 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:02.613 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1701849 ']' 00:22:02.613 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.613 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.613 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.613 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.613 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.613 [2024-11-04 12:26:37.054447] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:22:02.613 [2024-11-04 12:26:37.054501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.613 [2024-11-04 12:26:37.121231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.613 [2024-11-04 12:26:37.158170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:02.613 [2024-11-04 12:26:37.158205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.613 [2024-11-04 12:26:37.158213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.613 [2024-11-04 12:26:37.158220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.613 [2024-11-04 12:26:37.158225] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.613 [2024-11-04 12:26:37.159716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.613 [2024-11-04 12:26:37.159831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.613 [2024-11-04 12:26:37.160151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.613 [2024-11-04 12:26:37.160152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.554 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.554 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.555 12:26:38 
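The adq_configure_driver step traced above (perf_adq.sh lines 22-38) is where ADQ actually gets armed on the ice NIC. A sketch of the same commands, run inside the target namespace against cvl_0_0 with the listener at 10.0.0.2:4420 (set_xps_rxqs, the final step in the trace, is an SPDK helper script and is omitted here):

dev=cvl_0_0
ethtool --offload "$dev" hw-tc-offload on                  # enable TC offload
ethtool --set-priv-flags "$dev" channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                             # busy-poll sockets
sysctl -w net.core.busy_read=1
# two traffic classes: tc0 = default (queues 0-1), tc1 = NVMe/TCP (queues 2-3)
tc qdisc add dev "$dev" root mqprio num_tc 2 map 0 1 \
    queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev "$dev" ingress
# steer NVMe/TCP flows into tc1 entirely in hardware (skip_sw)
tc filter add dev "$dev" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1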
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.555 [2024-11-04 12:26:38.025851] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.555 Malloc1 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.555 [2024-11-04 12:26:38.096115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1701934 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:03.555 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:06.100 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:06.100 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.100 12:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.100 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.100 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:06.100 "tick_rate": 2400000000, 00:22:06.100 "poll_groups": [ 00:22:06.100 { 00:22:06.100 "name": "nvmf_tgt_poll_group_000", 00:22:06.100 "admin_qpairs": 1, 00:22:06.100 "io_qpairs": 2, 00:22:06.100 "current_admin_qpairs": 1, 00:22:06.100 "current_io_qpairs": 2, 00:22:06.100 "pending_bdev_io": 0, 00:22:06.100 "completed_nvme_io": 29713, 00:22:06.100 "transports": [ 00:22:06.100 { 00:22:06.100 "trtype": "TCP" 00:22:06.100 } 00:22:06.100 ] 00:22:06.100 }, 00:22:06.100 { 00:22:06.100 "name": "nvmf_tgt_poll_group_001", 00:22:06.100 "admin_qpairs": 0, 00:22:06.100 "io_qpairs": 2, 00:22:06.100 "current_admin_qpairs": 0, 00:22:06.100 "current_io_qpairs": 2, 00:22:06.100 "pending_bdev_io": 0, 00:22:06.100 "completed_nvme_io": 40640, 00:22:06.100 "transports": [ 00:22:06.100 { 00:22:06.100 "trtype": "TCP" 00:22:06.100 } 00:22:06.100 ] 00:22:06.100 }, 00:22:06.100 { 00:22:06.100 "name": "nvmf_tgt_poll_group_002", 00:22:06.100 "admin_qpairs": 0, 00:22:06.100 "io_qpairs": 0, 00:22:06.100 "current_admin_qpairs": 0, 00:22:06.100 "current_io_qpairs": 0, 00:22:06.100 "pending_bdev_io": 0, 00:22:06.100 "completed_nvme_io": 0, 00:22:06.100 "transports": [ 00:22:06.100 { 00:22:06.100 "trtype": "TCP" 00:22:06.100 } 00:22:06.100 ] 00:22:06.100 }, 00:22:06.100 { 00:22:06.100 "name": "nvmf_tgt_poll_group_003", 00:22:06.100 "admin_qpairs": 0, 00:22:06.100 "io_qpairs": 0, 00:22:06.100 "current_admin_qpairs": 0, 00:22:06.100 "current_io_qpairs": 0, 00:22:06.100 "pending_bdev_io": 0, 00:22:06.100 "completed_nvme_io": 0, 00:22:06.100 "transports": [ 00:22:06.100 { 00:22:06.100 "trtype": "TCP" 00:22:06.100 } 00:22:06.100 ] 00:22:06.100 } 00:22:06.100 ] 00:22:06.100 }' 00:22:06.100 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:06.100 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:06.100 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:06.100 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:06.100 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1701934 00:22:14.242 Initializing NVMe Controllers 00:22:14.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:14.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:14.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:14.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:14.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:14.242 Initialization complete. Launching workers. 
00:22:14.242 ======================================================== 00:22:14.242 Latency(us) 00:22:14.242 Device Information : IOPS MiB/s Average min max 00:22:14.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11102.50 43.37 5764.87 1245.94 51996.38 00:22:14.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9777.60 38.19 6546.24 1137.44 49852.46 00:22:14.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11298.80 44.14 5664.34 1228.33 49781.27 00:22:14.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8836.20 34.52 7243.33 1191.95 53787.96 00:22:14.243 ======================================================== 00:22:14.243 Total : 41015.10 160.22 6241.96 1137.44 53787.96 00:22:14.243 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:14.243 rmmod nvme_tcp 00:22:14.243 rmmod nvme_fabrics 00:22:14.243 rmmod nvme_keyring 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1701849 ']' 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1701849 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1701849 ']' 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1701849 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1701849 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1701849' 00:22:14.243 killing process with pid 1701849 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1701849 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1701849 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:14.243 
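The pass/fail criterion for both runs comes from nvmf_get_stats: without placement, all four poll groups must carry I/O (count=4 in the first run); with ADQ enabled, the qpairs must collapse onto the groups backing tc1, leaving the rest idle (count=2 above, checked at perf_adq.sh line 109). A condensed sketch of the ADQ-side check, rpc.py path assumed as in the earlier sketch:

rpc=scripts/rpc.py                       # assumed path, as before
idle=$($rpc nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
    | wc -l)
# fewer than 2 idle poll groups means qpair placement did not take effect
if [[ $idle -lt 2 ]]; then
    echo "ADQ placement failed: only $idle idle poll groups" >&2
    exit 1
fi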
12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.243 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:17.550 00:22:17.550 real 0m52.681s 00:22:17.550 user 2m49.307s 00:22:17.550 sys 0m10.828s 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.550 ************************************ 00:22:17.550 END TEST nvmf_perf_adq 00:22:17.550 ************************************ 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:17.550 ************************************ 00:22:17.550 START TEST nvmf_shutdown 00:22:17.550 ************************************ 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:17.550 * Looking for test storage... 
00:22:17.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.550 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:17.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.551 --rc genhtml_branch_coverage=1 00:22:17.551 --rc genhtml_function_coverage=1 00:22:17.551 --rc genhtml_legend=1 00:22:17.551 --rc geninfo_all_blocks=1 00:22:17.551 --rc geninfo_unexecuted_blocks=1 00:22:17.551 00:22:17.551 ' 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:17.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.551 --rc genhtml_branch_coverage=1 00:22:17.551 --rc genhtml_function_coverage=1 00:22:17.551 --rc genhtml_legend=1 00:22:17.551 --rc geninfo_all_blocks=1 00:22:17.551 --rc geninfo_unexecuted_blocks=1 00:22:17.551 00:22:17.551 ' 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:17.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.551 --rc genhtml_branch_coverage=1 00:22:17.551 --rc genhtml_function_coverage=1 00:22:17.551 --rc genhtml_legend=1 00:22:17.551 --rc geninfo_all_blocks=1 00:22:17.551 --rc geninfo_unexecuted_blocks=1 00:22:17.551 00:22:17.551 ' 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:17.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.551 --rc genhtml_branch_coverage=1 00:22:17.551 --rc genhtml_function_coverage=1 00:22:17.551 --rc genhtml_legend=1 00:22:17.551 --rc geninfo_all_blocks=1 00:22:17.551 --rc geninfo_unexecuted_blocks=1 00:22:17.551 00:22:17.551 ' 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
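The lcov gate at the top of the shutdown test rests on the cmp_versions helper traced above: both version strings are split on '.', '-' and ':' and compared numerically field by field. A simplified sketch of the less-than case only (the real helper in scripts/common.sh also handles the other comparison operators, and this sketch assumes purely numeric version components):

lt() {    # usage: lt 1.15 2  ->  true iff $1 < $2
    local -a v1 v2
    local i
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        # missing fields compare as 0
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1    # equal is not less-than
}
lt "$(lcov --version | awk '{print $NF}')" 2 \
    && echo "pre-2.0 lcov: branch/function coverage needs the --rc options"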
00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:17.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:17.551 12:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:17.551 ************************************ 00:22:17.551 START TEST nvmf_shutdown_tc1 00:22:17.551 ************************************ 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:17.551 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:25.709 12:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:25.709 12:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:25.709 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:25.709 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:25.709 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:25.709 12:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:25.709 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.709 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:25.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:22:25.710 00:22:25.710 --- 10.0.0.2 ping statistics --- 00:22:25.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.710 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:25.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:22:25.710 00:22:25.710 --- 10.0.0.1 ping statistics --- 00:22:25.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.710 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=1708433 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 1708433 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1708433 ']' 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
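Condensed, the nvmf_tcp_init sequence traced above gives one host both ends of the fabric: after the PCI scan identified the two e810 ports (cvl_0_0 / cvl_0_1), one port is moved into a private network namespace to act as the NVMe/TCP target (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens the 4420 listener path, and a ping in each direction proves connectivity before nvmf_tgt is launched under ip netns exec. The same setup with the script wrappers stripped (run as root):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator-side port
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator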
00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:25.710 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.710 [2024-11-04 12:26:59.448588] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:22:25.710 [2024-11-04 12:26:59.448657] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.710 [2024-11-04 12:26:59.537269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.710 [2024-11-04 12:26:59.590320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.710 [2024-11-04 12:26:59.590372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.710 [2024-11-04 12:26:59.590381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.710 [2024-11-04 12:26:59.590389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.710 [2024-11-04 12:26:59.590395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:25.710 [2024-11-04 12:26:59.592410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.710 [2024-11-04 12:26:59.592579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.710 [2024-11-04 12:26:59.592744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.710 [2024-11-04 12:26:59.592744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:25.710 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:25.710 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:25.710 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:25.710 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:25.710 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.971 [2024-11-04 12:27:00.300677] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:25.971 12:27:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.971 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:25.971 Malloc1 
00:22:25.971 [2024-11-04 12:27:00.422473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.971 Malloc2 00:22:25.971 Malloc3 00:22:25.971 Malloc4 00:22:26.233 Malloc5 00:22:26.233 Malloc6 00:22:26.233 Malloc7 00:22:26.233 Malloc8 00:22:26.233 Malloc9 00:22:26.233 Malloc10 00:22:26.233 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.233 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:26.233 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:26.233 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.495 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1708780 00:22:26.495 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1708780 /var/tmp/bdevperf.sock 00:22:26.495 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1708780 ']' 00:22:26.495 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.495 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:26.495 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:26.495 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
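The repetitive run that follows is gen_nvmf_target_json tracing through subsystems 1-10: each pass of the loop appends one heredoc-built attach stanza to a bash array, the array is comma-joined (the IFS=, below) and validated and pretty-printed through jq, and the result reaches the app over process substitution as --json /dev/fd/63 instead of a file on disk. A sketch of the pattern, rebuilt with printf instead of the script's tab-indented heredocs to stay copy-paste safe, and with the outer "subsystems" wrapper inferred (it is consumed by jq and never echoed in this trace):

    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # one bdev_nvme_attach_controller stanza per subsystem
            config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "tcp",
              "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode%s",
              "hostnqn": "nqn.2016-06.io.spdk:host%s",
              "hdgst": false, "ddgst": false},
              "method": "bdev_nvme_attach_controller"}' \
              "$subsystem" "$subsystem" "$subsystem")")
        done
        local IFS=,   # makes ${config[*]} below comma-join the stanzas
        jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
    }
    # e.g.: bdevperf --json <(gen_nvmf_target_json {1..10}) -q 64 -o 65536 -w verify -t 1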
00:22:26.495 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:26.495 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:26.495 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.495 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:26.496 { 00:22:26.496 "params": { 00:22:26.496 "name": "Nvme$subsystem", 00:22:26.496 "trtype": "$TEST_TRANSPORT", 00:22:26.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:26.496 "adrfam": "ipv4", 00:22:26.496 "trsvcid": "$NVMF_PORT", 00:22:26.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:26.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:26.496 "hdgst": ${hdgst:-false}, 00:22:26.496 "ddgst": ${ddgst:-false} 00:22:26.496 }, 00:22:26.496 "method": "bdev_nvme_attach_controller" 00:22:26.496 } 00:22:26.496 EOF 00:22:26.496 )") 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:26.496 { 00:22:26.496 "params": { 00:22:26.496 "name": "Nvme$subsystem", 00:22:26.496 "trtype": "$TEST_TRANSPORT", 00:22:26.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:26.496 "adrfam": "ipv4", 00:22:26.496 "trsvcid": "$NVMF_PORT", 00:22:26.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:26.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:26.496 "hdgst": ${hdgst:-false}, 00:22:26.496 "ddgst": ${ddgst:-false} 00:22:26.496 }, 00:22:26.496 "method": "bdev_nvme_attach_controller" 00:22:26.496 } 00:22:26.496 EOF 00:22:26.496 )") 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:26.496 { 00:22:26.496 "params": { 00:22:26.496 "name": "Nvme$subsystem", 00:22:26.496 "trtype": "$TEST_TRANSPORT", 00:22:26.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:26.496 "adrfam": "ipv4", 00:22:26.496 "trsvcid": "$NVMF_PORT", 00:22:26.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:26.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:26.496 "hdgst": ${hdgst:-false}, 00:22:26.496 "ddgst": ${ddgst:-false} 00:22:26.496 }, 00:22:26.496 "method": "bdev_nvme_attach_controller" 00:22:26.496 } 00:22:26.496 EOF 00:22:26.496 )") 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:26.496 { 00:22:26.496 "params": { 00:22:26.496 "name": "Nvme$subsystem", 00:22:26.496 "trtype": "$TEST_TRANSPORT", 00:22:26.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:26.496 "adrfam": "ipv4", 00:22:26.496 "trsvcid": "$NVMF_PORT", 00:22:26.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:26.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:26.496 "hdgst": ${hdgst:-false}, 00:22:26.496 "ddgst": ${ddgst:-false} 00:22:26.496 }, 00:22:26.496 "method": "bdev_nvme_attach_controller" 00:22:26.496 } 00:22:26.496 EOF 00:22:26.496 )") 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:26.496 { 00:22:26.496 "params": { 00:22:26.496 "name": "Nvme$subsystem", 00:22:26.496 "trtype": "$TEST_TRANSPORT", 00:22:26.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:26.496 "adrfam": "ipv4", 00:22:26.496 "trsvcid": "$NVMF_PORT", 00:22:26.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:26.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:26.496 "hdgst": ${hdgst:-false}, 00:22:26.496 "ddgst": ${ddgst:-false} 00:22:26.496 }, 00:22:26.496 "method": "bdev_nvme_attach_controller" 00:22:26.496 } 00:22:26.496 EOF 00:22:26.496 )") 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:26.496 { 00:22:26.496 "params": { 00:22:26.496 "name": "Nvme$subsystem", 00:22:26.496 "trtype": "$TEST_TRANSPORT", 00:22:26.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:26.496 "adrfam": "ipv4", 00:22:26.496 "trsvcid": "$NVMF_PORT", 00:22:26.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:26.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:26.496 "hdgst": ${hdgst:-false}, 00:22:26.496 "ddgst": ${ddgst:-false} 00:22:26.496 }, 00:22:26.496 "method": "bdev_nvme_attach_controller" 00:22:26.496 } 00:22:26.496 EOF 00:22:26.496 )") 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:26.496 [2024-11-04 12:27:00.883818] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:22:26.496 [2024-11-04 12:27:00.883872] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:26.496 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:26.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:26.497 { 00:22:26.497 "params": { 00:22:26.497 "name": "Nvme$subsystem", 00:22:26.497 "trtype": "$TEST_TRANSPORT", 00:22:26.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:26.497 "adrfam": "ipv4", 00:22:26.497 "trsvcid": "$NVMF_PORT", 00:22:26.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:26.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:26.497 "hdgst": ${hdgst:-false}, 00:22:26.497 "ddgst": ${ddgst:-false} 00:22:26.497 }, 00:22:26.497 "method": "bdev_nvme_attach_controller" 00:22:26.497 } 00:22:26.497 EOF 00:22:26.497 )") 00:22:26.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:26.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:26.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:26.497 { 00:22:26.497 "params": { 00:22:26.497 "name": "Nvme$subsystem", 00:22:26.497 "trtype": "$TEST_TRANSPORT", 00:22:26.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:26.497 "adrfam": "ipv4", 00:22:26.497 "trsvcid": "$NVMF_PORT", 00:22:26.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:26.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:26.497 "hdgst": ${hdgst:-false}, 00:22:26.497 "ddgst": ${ddgst:-false} 00:22:26.497 }, 00:22:26.497 "method": "bdev_nvme_attach_controller" 00:22:26.497 } 00:22:26.497 EOF 00:22:26.497 )") 00:22:26.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:26.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:26.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:26.497 { 00:22:26.497 "params": { 00:22:26.497 "name": "Nvme$subsystem", 00:22:26.497 "trtype": "$TEST_TRANSPORT", 00:22:26.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:26.497 "adrfam": "ipv4", 00:22:26.497 "trsvcid": "$NVMF_PORT", 00:22:26.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:26.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:26.497 "hdgst": ${hdgst:-false}, 00:22:26.497 "ddgst": ${ddgst:-false} 00:22:26.497 }, 00:22:26.497 "method": "bdev_nvme_attach_controller" 00:22:26.497 } 00:22:26.497 EOF 00:22:26.497 )") 00:22:26.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:26.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:26.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:26.497 { 00:22:26.497 "params": { 00:22:26.497 "name": "Nvme$subsystem", 00:22:26.497 "trtype": "$TEST_TRANSPORT", 00:22:26.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:26.497 "adrfam": "ipv4", 
00:22:26.497 "trsvcid": "$NVMF_PORT", 00:22:26.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:26.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:26.497 "hdgst": ${hdgst:-false}, 00:22:26.497 "ddgst": ${ddgst:-false} 00:22:26.497 }, 00:22:26.497 "method": "bdev_nvme_attach_controller" 00:22:26.497 } 00:22:26.497 EOF 00:22:26.497 )") 00:22:26.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:26.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:22:26.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:26.497 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:26.497 "params": { 00:22:26.497 "name": "Nvme1", 00:22:26.497 "trtype": "tcp", 00:22:26.497 "traddr": "10.0.0.2", 00:22:26.497 "adrfam": "ipv4", 00:22:26.497 "trsvcid": "4420", 00:22:26.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:26.497 "hdgst": false, 00:22:26.497 "ddgst": false 00:22:26.497 }, 00:22:26.497 "method": "bdev_nvme_attach_controller" 00:22:26.497 },{ 00:22:26.497 "params": { 00:22:26.497 "name": "Nvme2", 00:22:26.497 "trtype": "tcp", 00:22:26.497 "traddr": "10.0.0.2", 00:22:26.497 "adrfam": "ipv4", 00:22:26.497 "trsvcid": "4420", 00:22:26.497 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:26.497 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:26.497 "hdgst": false, 00:22:26.497 "ddgst": false 00:22:26.497 }, 00:22:26.497 "method": "bdev_nvme_attach_controller" 00:22:26.497 },{ 00:22:26.497 "params": { 00:22:26.497 "name": "Nvme3", 00:22:26.497 "trtype": "tcp", 00:22:26.497 "traddr": "10.0.0.2", 00:22:26.497 "adrfam": "ipv4", 00:22:26.497 "trsvcid": "4420", 00:22:26.497 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:26.497 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:26.497 "hdgst": false, 00:22:26.497 "ddgst": false 00:22:26.497 }, 00:22:26.497 "method": "bdev_nvme_attach_controller" 00:22:26.497 },{ 00:22:26.497 "params": { 00:22:26.497 "name": "Nvme4", 00:22:26.497 "trtype": "tcp", 00:22:26.497 "traddr": "10.0.0.2", 00:22:26.497 "adrfam": "ipv4", 00:22:26.497 "trsvcid": "4420", 00:22:26.497 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:26.497 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:26.497 "hdgst": false, 00:22:26.497 "ddgst": false 00:22:26.497 }, 00:22:26.498 "method": "bdev_nvme_attach_controller" 00:22:26.498 },{ 00:22:26.498 "params": { 00:22:26.498 "name": "Nvme5", 00:22:26.498 "trtype": "tcp", 00:22:26.498 "traddr": "10.0.0.2", 00:22:26.498 "adrfam": "ipv4", 00:22:26.498 "trsvcid": "4420", 00:22:26.498 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:26.498 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:26.498 "hdgst": false, 00:22:26.498 "ddgst": false 00:22:26.498 }, 00:22:26.498 "method": "bdev_nvme_attach_controller" 00:22:26.498 },{ 00:22:26.498 "params": { 00:22:26.498 "name": "Nvme6", 00:22:26.498 "trtype": "tcp", 00:22:26.498 "traddr": "10.0.0.2", 00:22:26.498 "adrfam": "ipv4", 00:22:26.498 "trsvcid": "4420", 00:22:26.498 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:26.498 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:26.498 "hdgst": false, 00:22:26.498 "ddgst": false 00:22:26.498 }, 00:22:26.498 "method": "bdev_nvme_attach_controller" 00:22:26.498 },{ 00:22:26.498 "params": { 00:22:26.498 "name": "Nvme7", 00:22:26.498 "trtype": "tcp", 00:22:26.498 "traddr": "10.0.0.2", 00:22:26.498 
"adrfam": "ipv4", 00:22:26.498 "trsvcid": "4420", 00:22:26.498 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:26.498 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:26.498 "hdgst": false, 00:22:26.498 "ddgst": false 00:22:26.498 }, 00:22:26.498 "method": "bdev_nvme_attach_controller" 00:22:26.498 },{ 00:22:26.498 "params": { 00:22:26.498 "name": "Nvme8", 00:22:26.498 "trtype": "tcp", 00:22:26.498 "traddr": "10.0.0.2", 00:22:26.498 "adrfam": "ipv4", 00:22:26.498 "trsvcid": "4420", 00:22:26.498 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:26.498 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:26.498 "hdgst": false, 00:22:26.498 "ddgst": false 00:22:26.498 }, 00:22:26.498 "method": "bdev_nvme_attach_controller" 00:22:26.498 },{ 00:22:26.498 "params": { 00:22:26.498 "name": "Nvme9", 00:22:26.498 "trtype": "tcp", 00:22:26.498 "traddr": "10.0.0.2", 00:22:26.498 "adrfam": "ipv4", 00:22:26.498 "trsvcid": "4420", 00:22:26.498 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:26.498 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:26.498 "hdgst": false, 00:22:26.498 "ddgst": false 00:22:26.498 }, 00:22:26.498 "method": "bdev_nvme_attach_controller" 00:22:26.498 },{ 00:22:26.498 "params": { 00:22:26.498 "name": "Nvme10", 00:22:26.498 "trtype": "tcp", 00:22:26.498 "traddr": "10.0.0.2", 00:22:26.498 "adrfam": "ipv4", 00:22:26.498 "trsvcid": "4420", 00:22:26.498 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:26.498 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:26.498 "hdgst": false, 00:22:26.498 "ddgst": false 00:22:26.498 }, 00:22:26.498 "method": "bdev_nvme_attach_controller" 00:22:26.498 }' 00:22:26.498 [2024-11-04 12:27:00.946272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.498 [2024-11-04 12:27:00.982924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.880 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.880 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:27.880 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:27.880 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.880 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:27.880 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.880 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1708780 00:22:27.880 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:27.880 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:28.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1708780 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1708433 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:28.821 { 00:22:28.821 "params": { 00:22:28.821 "name": "Nvme$subsystem", 00:22:28.821 "trtype": "$TEST_TRANSPORT", 00:22:28.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.821 "adrfam": "ipv4", 00:22:28.821 "trsvcid": "$NVMF_PORT", 00:22:28.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.821 "hdgst": ${hdgst:-false}, 00:22:28.821 "ddgst": ${ddgst:-false} 00:22:28.821 }, 00:22:28.821 "method": "bdev_nvme_attach_controller" 00:22:28.821 } 00:22:28.821 EOF 00:22:28.821 )") 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:28.821 { 00:22:28.821 "params": { 00:22:28.821 "name": "Nvme$subsystem", 00:22:28.821 "trtype": "$TEST_TRANSPORT", 00:22:28.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.821 "adrfam": "ipv4", 00:22:28.821 "trsvcid": "$NVMF_PORT", 00:22:28.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.821 "hdgst": ${hdgst:-false}, 00:22:28.821 "ddgst": ${ddgst:-false} 00:22:28.821 }, 00:22:28.821 "method": "bdev_nvme_attach_controller" 00:22:28.821 } 00:22:28.821 EOF 00:22:28.821 )") 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:28.821 { 00:22:28.821 "params": { 00:22:28.821 "name": "Nvme$subsystem", 00:22:28.821 "trtype": "$TEST_TRANSPORT", 00:22:28.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.821 "adrfam": "ipv4", 00:22:28.821 "trsvcid": "$NVMF_PORT", 00:22:28.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.821 "hdgst": ${hdgst:-false}, 00:22:28.821 "ddgst": ${ddgst:-false} 00:22:28.821 }, 00:22:28.821 "method": "bdev_nvme_attach_controller" 00:22:28.821 } 00:22:28.821 EOF 00:22:28.821 )") 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:28.821 { 00:22:28.821 "params": { 00:22:28.821 "name": "Nvme$subsystem", 00:22:28.821 "trtype": "$TEST_TRANSPORT", 00:22:28.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.821 "adrfam": "ipv4", 00:22:28.821 "trsvcid": "$NVMF_PORT", 00:22:28.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.821 "hdgst": ${hdgst:-false}, 00:22:28.821 "ddgst": ${ddgst:-false} 00:22:28.821 }, 00:22:28.821 "method": "bdev_nvme_attach_controller" 00:22:28.821 } 00:22:28.821 EOF 00:22:28.821 )") 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:28.821 { 00:22:28.821 "params": { 00:22:28.821 "name": "Nvme$subsystem", 00:22:28.821 "trtype": "$TEST_TRANSPORT", 00:22:28.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.821 "adrfam": "ipv4", 00:22:28.821 "trsvcid": "$NVMF_PORT", 00:22:28.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.821 "hdgst": ${hdgst:-false}, 00:22:28.821 "ddgst": ${ddgst:-false} 00:22:28.821 }, 00:22:28.821 "method": "bdev_nvme_attach_controller" 00:22:28.821 } 00:22:28.821 EOF 00:22:28.821 )") 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:28.821 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:28.821 { 00:22:28.821 "params": { 00:22:28.821 "name": "Nvme$subsystem", 00:22:28.821 "trtype": "$TEST_TRANSPORT", 00:22:28.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.821 "adrfam": "ipv4", 00:22:28.821 "trsvcid": "$NVMF_PORT", 00:22:28.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.821 "hdgst": ${hdgst:-false}, 00:22:28.821 "ddgst": ${ddgst:-false} 00:22:28.821 }, 00:22:28.821 "method": "bdev_nvme_attach_controller" 00:22:28.821 } 00:22:28.822 EOF 00:22:28.822 )") 00:22:28.822 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:28.822 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:28.822 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:28.822 { 00:22:28.822 "params": { 00:22:28.822 "name": "Nvme$subsystem", 00:22:28.822 "trtype": "$TEST_TRANSPORT", 00:22:28.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.822 "adrfam": "ipv4", 00:22:28.822 "trsvcid": "$NVMF_PORT", 00:22:28.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.822 "hdgst": ${hdgst:-false}, 00:22:28.822 "ddgst": ${ddgst:-false} 00:22:28.822 }, 00:22:28.822 "method": "bdev_nvme_attach_controller" 00:22:28.822 } 00:22:28.822 EOF 00:22:28.822 )") 00:22:28.822 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:29.081 12:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:29.081 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:29.081 { 00:22:29.081 "params": { 00:22:29.081 "name": "Nvme$subsystem", 00:22:29.081 "trtype": "$TEST_TRANSPORT", 00:22:29.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.081 "adrfam": "ipv4", 00:22:29.081 "trsvcid": "$NVMF_PORT", 00:22:29.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.081 "hdgst": ${hdgst:-false}, 00:22:29.081 "ddgst": ${ddgst:-false} 00:22:29.081 }, 00:22:29.081 "method": "bdev_nvme_attach_controller" 00:22:29.081 } 00:22:29.081 EOF 00:22:29.081 )") 00:22:29.082 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:29.082 [2024-11-04 12:27:03.397612] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:22:29.082 [2024-11-04 12:27:03.397678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709485 ] 00:22:29.082 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:29.082 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:29.082 { 00:22:29.082 "params": { 00:22:29.082 "name": "Nvme$subsystem", 00:22:29.082 "trtype": "$TEST_TRANSPORT", 00:22:29.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.082 "adrfam": "ipv4", 00:22:29.082 "trsvcid": "$NVMF_PORT", 00:22:29.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.082 "hdgst": ${hdgst:-false}, 00:22:29.082 "ddgst": ${ddgst:-false} 00:22:29.082 }, 00:22:29.082 "method": "bdev_nvme_attach_controller" 00:22:29.082 } 00:22:29.082 EOF 00:22:29.082 )") 00:22:29.082 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:29.082 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:29.082 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:29.082 { 00:22:29.082 "params": { 00:22:29.082 "name": "Nvme$subsystem", 00:22:29.082 "trtype": "$TEST_TRANSPORT", 00:22:29.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.082 "adrfam": "ipv4", 00:22:29.082 "trsvcid": "$NVMF_PORT", 00:22:29.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.082 "hdgst": ${hdgst:-false}, 00:22:29.082 "ddgst": ${ddgst:-false} 00:22:29.082 }, 00:22:29.082 "method": "bdev_nvme_attach_controller" 00:22:29.082 } 00:22:29.082 EOF 00:22:29.082 )") 00:22:29.082 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:29.082 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
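
The trace above is the config-assembly pattern used throughout these tests: each pass of the for-subsystem loop appends one bdev_nvme_attach_controller JSON fragment to a bash array via a quoted heredoc, and the jq / IFS=, / printf steps around this point comma-join the fragments into the single document that bdevperf reads from an anonymous fd. A minimal standalone sketch of the same pattern, with illustrative values and a hypothetical helper name (not the tree's gen_nvmf_target_json itself); it assumes jq is installed:

#!/usr/bin/env bash
# Append one attach-controller fragment per subsystem index to an array.
gen_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # "${config[*]}" joins the fragments with the first character of IFS,
    # so IFS=, yields a comma-separated config list; jq validates and
    # pretty-prints the assembled document.
    local IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
}

gen_target_json 1 2 3   # emits attach configs for Nvme1..Nvme3
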
00:22:29.082 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:29.082 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:29.082 "params": { 00:22:29.082 "name": "Nvme1", 00:22:29.082 "trtype": "tcp", 00:22:29.082 "traddr": "10.0.0.2", 00:22:29.082 "adrfam": "ipv4", 00:22:29.082 "trsvcid": "4420", 00:22:29.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.082 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.082 "hdgst": false, 00:22:29.082 "ddgst": false 00:22:29.082 }, 00:22:29.082 "method": "bdev_nvme_attach_controller" 00:22:29.082 },{ 00:22:29.082 "params": { 00:22:29.082 "name": "Nvme2", 00:22:29.082 "trtype": "tcp", 00:22:29.082 "traddr": "10.0.0.2", 00:22:29.082 "adrfam": "ipv4", 00:22:29.082 "trsvcid": "4420", 00:22:29.082 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:29.082 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:29.082 "hdgst": false, 00:22:29.082 "ddgst": false 00:22:29.082 }, 00:22:29.082 "method": "bdev_nvme_attach_controller" 00:22:29.082 },{ 00:22:29.082 "params": { 00:22:29.082 "name": "Nvme3", 00:22:29.082 "trtype": "tcp", 00:22:29.082 "traddr": "10.0.0.2", 00:22:29.082 "adrfam": "ipv4", 00:22:29.082 "trsvcid": "4420", 00:22:29.082 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:29.082 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:29.082 "hdgst": false, 00:22:29.082 "ddgst": false 00:22:29.082 }, 00:22:29.082 "method": "bdev_nvme_attach_controller" 00:22:29.082 },{ 00:22:29.082 "params": { 00:22:29.082 "name": "Nvme4", 00:22:29.082 "trtype": "tcp", 00:22:29.082 "traddr": "10.0.0.2", 00:22:29.082 "adrfam": "ipv4", 00:22:29.082 "trsvcid": "4420", 00:22:29.082 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:29.082 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:29.082 "hdgst": false, 00:22:29.082 "ddgst": false 00:22:29.082 }, 00:22:29.082 "method": "bdev_nvme_attach_controller" 00:22:29.082 },{ 00:22:29.082 "params": { 00:22:29.082 "name": "Nvme5", 00:22:29.082 "trtype": "tcp", 00:22:29.082 "traddr": "10.0.0.2", 00:22:29.082 "adrfam": "ipv4", 00:22:29.082 "trsvcid": "4420", 00:22:29.082 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:29.082 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:29.082 "hdgst": false, 00:22:29.082 "ddgst": false 00:22:29.082 }, 00:22:29.082 "method": "bdev_nvme_attach_controller" 00:22:29.082 },{ 00:22:29.082 "params": { 00:22:29.082 "name": "Nvme6", 00:22:29.082 "trtype": "tcp", 00:22:29.082 "traddr": "10.0.0.2", 00:22:29.082 "adrfam": "ipv4", 00:22:29.082 "trsvcid": "4420", 00:22:29.082 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:29.082 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:29.082 "hdgst": false, 00:22:29.082 "ddgst": false 00:22:29.082 }, 00:22:29.082 "method": "bdev_nvme_attach_controller" 00:22:29.082 },{ 00:22:29.082 "params": { 00:22:29.082 "name": "Nvme7", 00:22:29.082 "trtype": "tcp", 00:22:29.082 "traddr": "10.0.0.2", 00:22:29.082 "adrfam": "ipv4", 00:22:29.082 "trsvcid": "4420", 00:22:29.082 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:29.082 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:29.082 "hdgst": false, 00:22:29.082 "ddgst": false 00:22:29.082 }, 00:22:29.082 "method": "bdev_nvme_attach_controller" 00:22:29.082 },{ 00:22:29.082 "params": { 00:22:29.082 "name": "Nvme8", 00:22:29.082 "trtype": "tcp", 00:22:29.082 "traddr": "10.0.0.2", 00:22:29.082 "adrfam": "ipv4", 00:22:29.082 "trsvcid": "4420", 00:22:29.082 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:29.082 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:29.082 "hdgst": false,
00:22:29.082 "ddgst": false
00:22:29.082 },
00:22:29.082 "method": "bdev_nvme_attach_controller"
00:22:29.082 },{
00:22:29.082 "params": {
00:22:29.082 "name": "Nvme9",
00:22:29.082 "trtype": "tcp",
00:22:29.082 "traddr": "10.0.0.2",
00:22:29.082 "adrfam": "ipv4",
00:22:29.082 "trsvcid": "4420",
00:22:29.082 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:22:29.082 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:22:29.082 "hdgst": false,
00:22:29.082 "ddgst": false
00:22:29.082 },
00:22:29.082 "method": "bdev_nvme_attach_controller"
00:22:29.082 },{
00:22:29.082 "params": {
00:22:29.082 "name": "Nvme10",
00:22:29.082 "trtype": "tcp",
00:22:29.082 "traddr": "10.0.0.2",
00:22:29.082 "adrfam": "ipv4",
00:22:29.082 "trsvcid": "4420",
00:22:29.082 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:22:29.082 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:22:29.082 "hdgst": false,
00:22:29.082 "ddgst": false
00:22:29.082 },
00:22:29.082 "method": "bdev_nvme_attach_controller"
00:22:29.082 }'
00:22:29.082 [2024-11-04 12:27:03.459132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:29.082 [2024-11-04 12:27:03.495351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:30.463 Running I/O for 1 seconds...
00:22:31.663 1869.00 IOPS, 116.81 MiB/s
00:22:31.663 Latency(us)
[2024-11-04T11:27:06.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:31.663 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.663 Verification LBA range: start 0x0 length 0x400
00:22:31.663 Nvme1n1 : 1.15 223.25 13.95 0.00 0.00 283586.99 16711.68 249910.61
00:22:31.663 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.663 Verification LBA range: start 0x0 length 0x400
00:22:31.663 Nvme2n1 : 1.14 228.28 14.27 0.00 0.00 272162.29 4505.60 248162.99
00:22:31.663 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.663 Verification LBA range: start 0x0 length 0x400
00:22:31.663 Nvme3n1 : 1.10 233.12 14.57 0.00 0.00 262328.32 18568.53 276125.01
00:22:31.663 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.663 Verification LBA range: start 0x0 length 0x400
00:22:31.663 Nvme4n1 : 1.11 231.25 14.45 0.00 0.00 259131.31 19223.89 244667.73
00:22:31.663 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.663 Verification LBA range: start 0x0 length 0x400
00:22:31.663 Nvme5n1 : 1.15 223.07 13.94 0.00 0.00 264935.89 17367.04 253405.87
00:22:31.663 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.663 Verification LBA range: start 0x0 length 0x400
00:22:31.663 Nvme6n1 : 1.18 270.59 16.91 0.00 0.00 214264.66 6471.68 241172.48
00:22:31.663 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.663 Verification LBA range: start 0x0 length 0x400
00:22:31.664 Nvme7n1 : 1.18 271.85 16.99 0.00 0.00 209653.76 18131.63 244667.73
00:22:31.664 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.664 Verification LBA range: start 0x0 length 0x400
00:22:31.664 Nvme8n1 : 1.19 269.04 16.82 0.00 0.00 208980.57 11250.35 234181.97
00:22:31.664 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.664 Verification LBA range: start 0x0 length 0x400
00:22:31.664 Nvme9n1 : 1.17 218.25 13.64 0.00 0.00 252640.21 21189.97 265639.25
00:22:31.664 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:31.664 Verification LBA range: start 0x0 length 0x400
00:22:31.664 Nvme10n1 : 1.20 267.20 16.70 0.00 0.00 203119.96 10977.28 265639.25
[2024-11-04T11:27:06.234Z] ===================================================================================================================
[2024-11-04T11:27:06.234Z] Total : 2435.89 152.24 0.00 0.00 240028.25 4505.60 276125.01
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 1708433 ']'
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 1708433
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1708433 ']'
00:22:31.924 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1708433
00:22:31.925 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname
00:22:31.925 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:31.925 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1708433
00:22:31.925 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:22:31.925 12:27:06
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:31.925 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1708433' 00:22:31.925 killing process with pid 1708433 00:22:31.925 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1708433 00:22:31.925 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1708433 00:22:32.186 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:32.186 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:32.186 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:32.186 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:32.186 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:22:32.186 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:32.186 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:22:32.186 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:32.186 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:32.186 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.186 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.186 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.732 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:34.732 00:22:34.732 real 0m16.761s 00:22:34.732 user 0m34.386s 00:22:34.732 sys 0m6.715s 00:22:34.732 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:34.732 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.732 ************************************ 00:22:34.732 END TEST nvmf_shutdown_tc1 00:22:34.732 ************************************ 00:22:34.732 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:34.732 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:34.732 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:34.732 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:34.732 ************************************ 00:22:34.732 START TEST nvmf_shutdown_tc2 00:22:34.732 ************************************ 00:22:34.732 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # 
nvmf_shutdown_tc2 00:22:34.732 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:34.732 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:34.733 12:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:34.733 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.733 12:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:34.733 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:34.733 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.733 12:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:34.733 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:34.733 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1
00:22:34.734 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:34.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:34.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms
00:22:34.734
00:22:34.734 --- 10.0.0.2 ping statistics ---
00:22:34.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:34.734 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:34.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:34.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms
00:22:34.734
00:22:34.734 --- 10.0.0.1 ping statistics ---
00:22:34.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:34.734 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 --
common/autotest_common.sh@724 -- # xtrace_disable 00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1710715 00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1710715 00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1710715 ']' 00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:34.734 12:27:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:34.734 [2024-11-04 12:27:09.256923] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:22:34.734 [2024-11-04 12:27:09.256995] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.996 [2024-11-04 12:27:09.345094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.996 [2024-11-04 12:27:09.380491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.996 [2024-11-04 12:27:09.380524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.996 [2024-11-04 12:27:09.380530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.996 [2024-11-04 12:27:09.380535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.996 [2024-11-04 12:27:09.380539] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
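
The EAL banner just above is the target coming up under nvmfappstart: nvmf_tgt is launched inside the server-side namespace (the doubled netns exec wrapper apparently comes from NVMF_APP already carrying NVMF_TARGET_NS_CMD), its pid is kept for the killprocess at teardown, and waitforlisten blocks until the RPC socket answers. A rough sketch of that launch-and-wait flow; the polling loop is illustrative, and the in-tree waitforlisten performs more checks:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path used by this job
# Launch the NVMe-oF target in the target namespace with the reactor mask from the test.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Poll the default RPC socket until the app responds, bailing out if it died.
for ((i = 0; i < 100; i++)); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done
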
00:22:34.996 [2024-11-04 12:27:09.381874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.996 [2024-11-04 12:27:09.382052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:34.996 [2024-11-04 12:27:09.382179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.996 [2024-11-04 12:27:09.382181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.581 [2024-11-04 12:27:10.112786] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.581 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.842 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.842 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.842 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.842 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.842 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.842 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.842 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.842 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.842 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.842 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.842 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:35.842 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:35.842 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.842 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.842 Malloc1 00:22:35.842 [2024-11-04 12:27:10.230785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.842 Malloc2 00:22:35.842 Malloc3 00:22:35.842 Malloc4 00:22:35.842 Malloc5 00:22:35.842 Malloc6 00:22:36.103 Malloc7 00:22:36.103 Malloc8 00:22:36.103 Malloc9 00:22:36.103 Malloc10 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1711384 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1711384 /var/tmp/bdevperf.sock 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1711384 ']' 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.104 12:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.104 { 00:22:36.104 "params": { 00:22:36.104 "name": "Nvme$subsystem", 00:22:36.104 "trtype": "$TEST_TRANSPORT", 00:22:36.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.104 "adrfam": "ipv4", 00:22:36.104 "trsvcid": "$NVMF_PORT", 00:22:36.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.104 "hdgst": ${hdgst:-false}, 00:22:36.104 "ddgst": ${ddgst:-false} 00:22:36.104 }, 00:22:36.104 "method": "bdev_nvme_attach_controller" 00:22:36.104 } 00:22:36.104 EOF 00:22:36.104 )") 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.104 { 00:22:36.104 "params": { 00:22:36.104 "name": "Nvme$subsystem", 00:22:36.104 "trtype": "$TEST_TRANSPORT", 00:22:36.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.104 "adrfam": "ipv4", 00:22:36.104 "trsvcid": "$NVMF_PORT", 00:22:36.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.104 "hdgst": ${hdgst:-false}, 00:22:36.104 "ddgst": ${ddgst:-false} 00:22:36.104 }, 00:22:36.104 "method": "bdev_nvme_attach_controller" 00:22:36.104 } 00:22:36.104 EOF 00:22:36.104 )") 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.104 { 00:22:36.104 "params": { 00:22:36.104 
"name": "Nvme$subsystem", 00:22:36.104 "trtype": "$TEST_TRANSPORT", 00:22:36.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.104 "adrfam": "ipv4", 00:22:36.104 "trsvcid": "$NVMF_PORT", 00:22:36.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.104 "hdgst": ${hdgst:-false}, 00:22:36.104 "ddgst": ${ddgst:-false} 00:22:36.104 }, 00:22:36.104 "method": "bdev_nvme_attach_controller" 00:22:36.104 } 00:22:36.104 EOF 00:22:36.104 )") 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.104 { 00:22:36.104 "params": { 00:22:36.104 "name": "Nvme$subsystem", 00:22:36.104 "trtype": "$TEST_TRANSPORT", 00:22:36.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.104 "adrfam": "ipv4", 00:22:36.104 "trsvcid": "$NVMF_PORT", 00:22:36.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.104 "hdgst": ${hdgst:-false}, 00:22:36.104 "ddgst": ${ddgst:-false} 00:22:36.104 }, 00:22:36.104 "method": "bdev_nvme_attach_controller" 00:22:36.104 } 00:22:36.104 EOF 00:22:36.104 )") 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.104 { 00:22:36.104 "params": { 00:22:36.104 "name": "Nvme$subsystem", 00:22:36.104 "trtype": "$TEST_TRANSPORT", 00:22:36.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.104 "adrfam": "ipv4", 00:22:36.104 "trsvcid": "$NVMF_PORT", 00:22:36.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.104 "hdgst": ${hdgst:-false}, 00:22:36.104 "ddgst": ${ddgst:-false} 00:22:36.104 }, 00:22:36.104 "method": "bdev_nvme_attach_controller" 00:22:36.104 } 00:22:36.104 EOF 00:22:36.104 )") 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.104 { 00:22:36.104 "params": { 00:22:36.104 "name": "Nvme$subsystem", 00:22:36.104 "trtype": "$TEST_TRANSPORT", 00:22:36.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.104 "adrfam": "ipv4", 00:22:36.104 "trsvcid": "$NVMF_PORT", 00:22:36.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.104 "hdgst": ${hdgst:-false}, 00:22:36.104 "ddgst": ${ddgst:-false} 00:22:36.104 }, 00:22:36.104 "method": "bdev_nvme_attach_controller" 00:22:36.104 } 00:22:36.104 EOF 00:22:36.104 )") 00:22:36.104 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:36.366 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:22:36.366 [2024-11-04 12:27:10.674942] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:22:36.366 [2024-11-04 12:27:10.674996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1711384 ] 00:22:36.366 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.366 { 00:22:36.366 "params": { 00:22:36.366 "name": "Nvme$subsystem", 00:22:36.366 "trtype": "$TEST_TRANSPORT", 00:22:36.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.366 "adrfam": "ipv4", 00:22:36.366 "trsvcid": "$NVMF_PORT", 00:22:36.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.366 "hdgst": ${hdgst:-false}, 00:22:36.367 "ddgst": ${ddgst:-false} 00:22:36.367 }, 00:22:36.367 "method": "bdev_nvme_attach_controller" 00:22:36.367 } 00:22:36.367 EOF 00:22:36.367 )") 00:22:36.367 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:36.367 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.367 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.367 { 00:22:36.367 "params": { 00:22:36.367 "name": "Nvme$subsystem", 00:22:36.367 "trtype": "$TEST_TRANSPORT", 00:22:36.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.367 "adrfam": "ipv4", 00:22:36.367 "trsvcid": "$NVMF_PORT", 00:22:36.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.367 "hdgst": ${hdgst:-false}, 00:22:36.367 "ddgst": ${ddgst:-false} 00:22:36.367 }, 00:22:36.367 "method": "bdev_nvme_attach_controller" 00:22:36.367 } 00:22:36.367 EOF 00:22:36.367 )") 00:22:36.367 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:36.367 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.367 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.367 { 00:22:36.367 "params": { 00:22:36.367 "name": "Nvme$subsystem", 00:22:36.367 "trtype": "$TEST_TRANSPORT", 00:22:36.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.367 "adrfam": "ipv4", 00:22:36.367 "trsvcid": "$NVMF_PORT", 00:22:36.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.367 "hdgst": ${hdgst:-false}, 00:22:36.367 "ddgst": ${ddgst:-false} 00:22:36.367 }, 00:22:36.367 "method": "bdev_nvme_attach_controller" 00:22:36.367 } 00:22:36.367 EOF 00:22:36.367 )") 00:22:36.367 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:36.367 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:36.367 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:36.367 { 00:22:36.367 "params": { 00:22:36.367 "name": "Nvme$subsystem", 00:22:36.367 "trtype": "$TEST_TRANSPORT", 00:22:36.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.367 
"adrfam": "ipv4", 00:22:36.367 "trsvcid": "$NVMF_PORT", 00:22:36.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.367 "hdgst": ${hdgst:-false}, 00:22:36.367 "ddgst": ${ddgst:-false} 00:22:36.367 }, 00:22:36.367 "method": "bdev_nvme_attach_controller" 00:22:36.367 } 00:22:36.367 EOF 00:22:36.367 )") 00:22:36.367 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:36.367 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:22:36.367 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:22:36.367 12:27:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:36.367 "params": { 00:22:36.367 "name": "Nvme1", 00:22:36.367 "trtype": "tcp", 00:22:36.367 "traddr": "10.0.0.2", 00:22:36.367 "adrfam": "ipv4", 00:22:36.367 "trsvcid": "4420", 00:22:36.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:36.367 "hdgst": false, 00:22:36.367 "ddgst": false 00:22:36.367 }, 00:22:36.367 "method": "bdev_nvme_attach_controller" 00:22:36.367 },{ 00:22:36.367 "params": { 00:22:36.367 "name": "Nvme2", 00:22:36.367 "trtype": "tcp", 00:22:36.367 "traddr": "10.0.0.2", 00:22:36.367 "adrfam": "ipv4", 00:22:36.367 "trsvcid": "4420", 00:22:36.367 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:36.367 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:36.367 "hdgst": false, 00:22:36.367 "ddgst": false 00:22:36.367 }, 00:22:36.367 "method": "bdev_nvme_attach_controller" 00:22:36.367 },{ 00:22:36.367 "params": { 00:22:36.367 "name": "Nvme3", 00:22:36.367 "trtype": "tcp", 00:22:36.367 "traddr": "10.0.0.2", 00:22:36.367 "adrfam": "ipv4", 00:22:36.367 "trsvcid": "4420", 00:22:36.367 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:36.367 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:36.367 "hdgst": false, 00:22:36.367 "ddgst": false 00:22:36.367 }, 00:22:36.367 "method": "bdev_nvme_attach_controller" 00:22:36.367 },{ 00:22:36.367 "params": { 00:22:36.367 "name": "Nvme4", 00:22:36.367 "trtype": "tcp", 00:22:36.367 "traddr": "10.0.0.2", 00:22:36.367 "adrfam": "ipv4", 00:22:36.367 "trsvcid": "4420", 00:22:36.367 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:36.367 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:36.367 "hdgst": false, 00:22:36.367 "ddgst": false 00:22:36.367 }, 00:22:36.367 "method": "bdev_nvme_attach_controller" 00:22:36.367 },{ 00:22:36.367 "params": { 00:22:36.367 "name": "Nvme5", 00:22:36.367 "trtype": "tcp", 00:22:36.367 "traddr": "10.0.0.2", 00:22:36.367 "adrfam": "ipv4", 00:22:36.367 "trsvcid": "4420", 00:22:36.367 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:36.367 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:36.367 "hdgst": false, 00:22:36.367 "ddgst": false 00:22:36.367 }, 00:22:36.367 "method": "bdev_nvme_attach_controller" 00:22:36.367 },{ 00:22:36.367 "params": { 00:22:36.367 "name": "Nvme6", 00:22:36.367 "trtype": "tcp", 00:22:36.367 "traddr": "10.0.0.2", 00:22:36.367 "adrfam": "ipv4", 00:22:36.367 "trsvcid": "4420", 00:22:36.367 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:36.367 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:36.367 "hdgst": false, 00:22:36.367 "ddgst": false 00:22:36.367 }, 00:22:36.367 "method": "bdev_nvme_attach_controller" 00:22:36.367 },{ 00:22:36.367 "params": { 00:22:36.367 "name": "Nvme7", 00:22:36.367 "trtype": "tcp", 00:22:36.367 "traddr": "10.0.0.2", 
00:22:36.367 "adrfam": "ipv4", 00:22:36.367 "trsvcid": "4420", 00:22:36.367 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:36.367 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:36.367 "hdgst": false, 00:22:36.367 "ddgst": false 00:22:36.367 }, 00:22:36.367 "method": "bdev_nvme_attach_controller" 00:22:36.367 },{ 00:22:36.367 "params": { 00:22:36.367 "name": "Nvme8", 00:22:36.367 "trtype": "tcp", 00:22:36.367 "traddr": "10.0.0.2", 00:22:36.367 "adrfam": "ipv4", 00:22:36.367 "trsvcid": "4420", 00:22:36.367 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:36.367 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:36.367 "hdgst": false, 00:22:36.367 "ddgst": false 00:22:36.367 }, 00:22:36.367 "method": "bdev_nvme_attach_controller" 00:22:36.367 },{ 00:22:36.367 "params": { 00:22:36.367 "name": "Nvme9", 00:22:36.367 "trtype": "tcp", 00:22:36.367 "traddr": "10.0.0.2", 00:22:36.367 "adrfam": "ipv4", 00:22:36.367 "trsvcid": "4420", 00:22:36.367 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:36.367 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:36.367 "hdgst": false, 00:22:36.367 "ddgst": false 00:22:36.367 }, 00:22:36.367 "method": "bdev_nvme_attach_controller" 00:22:36.367 },{ 00:22:36.367 "params": { 00:22:36.367 "name": "Nvme10", 00:22:36.367 "trtype": "tcp", 00:22:36.367 "traddr": "10.0.0.2", 00:22:36.367 "adrfam": "ipv4", 00:22:36.367 "trsvcid": "4420", 00:22:36.367 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:36.367 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:36.367 "hdgst": false, 00:22:36.367 "ddgst": false 00:22:36.367 }, 00:22:36.367 "method": "bdev_nvme_attach_controller" 00:22:36.367 }' 00:22:36.367 [2024-11-04 12:27:10.736480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.367 [2024-11-04 12:27:10.772987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.754 Running I/O for 10 seconds... 
00:22:37.754 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:37.754 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:37.754 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:37.754 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.754 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.754 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.754 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:37.754 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:37.754 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:37.754 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:37.754 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:37.754 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:37.754 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:37.754 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:37.754 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:37.754 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.754 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.015 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.015 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:38.015 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:38.015 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:38.275 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:38.275 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:38.275 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:38.275 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:38.275 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.275 12:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.275 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.275 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:38.275 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:38.275 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1711384 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1711384 ']' 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1711384 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:38.536 12:27:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1711384 00:22:38.536 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:38.536 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:38.536 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1711384' 00:22:38.536 killing process with pid 1711384 00:22:38.536 12:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1711384
00:22:38.536 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1711384
00:22:38.797 Received shutdown signal, test time was about 0.994822 seconds
00:22:38.797
00:22:38.797 Latency(us)
00:22:38.797 [2024-11-04T11:27:13.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:38.797 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.797 Verification LBA range: start 0x0 length 0x400
00:22:38.797 Nvme1n1 : 0.99 257.64 16.10 0.00 0.00 245499.09 17585.49 244667.73
00:22:38.797 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.797 Verification LBA range: start 0x0 length 0x400
00:22:38.797 Nvme2n1 : 0.99 259.15 16.20 0.00 0.00 239390.08 19770.03 246415.36
00:22:38.797 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.797 Verification LBA range: start 0x0 length 0x400
00:22:38.797 Nvme3n1 : 0.98 261.53 16.35 0.00 0.00 232452.05 19879.25 249910.61
00:22:38.797 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.797 Verification LBA range: start 0x0 length 0x400
00:22:38.797 Nvme4n1 : 0.98 266.01 16.63 0.00 0.00 222921.01 4969.81 244667.73
00:22:38.797 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.797 Verification LBA range: start 0x0 length 0x400
00:22:38.797 Nvme5n1 : 0.96 200.39 12.52 0.00 0.00 290496.00 20534.61 244667.73
00:22:38.797 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.797 Verification LBA range: start 0x0 length 0x400
00:22:38.797 Nvme6n1 : 0.98 259.99 16.25 0.00 0.00 219726.51 19551.57 249910.61
00:22:38.797 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.797 Verification LBA range: start 0x0 length 0x400
00:22:38.797 Nvme7n1 : 0.97 267.69 16.73 0.00 0.00 208325.29 16930.13 223696.21
00:22:38.797 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.797 Verification LBA range: start 0x0 length 0x400
00:22:38.797 Nvme8n1 : 0.99 258.23 16.14 0.00 0.00 211968.00 23156.05 244667.73
00:22:38.797 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.797 Verification LBA range: start 0x0 length 0x400
00:22:38.797 Nvme9n1 : 0.97 196.94 12.31 0.00 0.00 270951.25 19005.44 267386.88
00:22:38.797 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.797 Verification LBA range: start 0x0 length 0x400
00:22:38.797 Nvme10n1 : 0.96 206.26 12.89 0.00 0.00 250823.47 3932.16 249910.61
00:22:38.797 [2024-11-04T11:27:13.367Z] ===================================================================================================================
00:22:38.797 [2024-11-04T11:27:13.367Z] Total : 2433.82 152.11 0.00 0.00 236666.04 3932.16 267386.88
00:22:38.797 12:27:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:22:39.740 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1710715
00:22:39.740 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:22:39.740 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:39.740 12:27:14
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:39.740 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:39.740 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:39.740 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:39.740 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:39.740 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:39.740 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:39.740 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:39.740 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:39.740 rmmod nvme_tcp 00:22:39.740 rmmod nvme_fabrics 00:22:40.001 rmmod nvme_keyring 00:22:40.001 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:40.001 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:40.001 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:40.001 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 1710715 ']' 00:22:40.001 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 1710715 00:22:40.001 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1710715 ']' 00:22:40.001 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1710715 00:22:40.001 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:40.001 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:40.001 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1710715 00:22:40.001 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:40.001 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:40.001 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1710715' 00:22:40.001 killing process with pid 1710715 00:22:40.001 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1710715 00:22:40.001 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1710715 00:22:40.262 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:40.262 12:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:40.262 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:40.262 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:40.262 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:22:40.262 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:22:40.262 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:40.262 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:40.262 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:40.262 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.262 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.262 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.174 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:42.174 00:22:42.174 real 0m7.897s 00:22:42.174 user 0m23.800s 00:22:42.174 sys 0m1.330s 00:22:42.174 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:42.174 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.174 ************************************ 00:22:42.174 END TEST nvmf_shutdown_tc2 00:22:42.174 ************************************ 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:42.435 ************************************ 00:22:42.435 START TEST nvmf_shutdown_tc3 00:22:42.435 ************************************ 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@436 -- # local -g is_hw=no 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:42.435 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:42.435 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.435 12:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.435 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:42.436 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:42.436 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.436 12:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:42.436 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:42.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:22:42.697 00:22:42.697 --- 10.0.0.2 ping statistics --- 00:22:42.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.697 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:42.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:22:42.697 00:22:42.697 --- 10.0.0.1 ping statistics --- 00:22:42.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.697 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=1712853 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 1712853 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:42.697 12:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1712853 ']' 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:42.697 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.697 [2024-11-04 12:27:17.246307] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:22:42.697 [2024-11-04 12:27:17.246376] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.958 [2024-11-04 12:27:17.335951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:42.958 [2024-11-04 12:27:17.377814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.958 [2024-11-04 12:27:17.377858] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.958 [2024-11-04 12:27:17.377864] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.958 [2024-11-04 12:27:17.377869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.958 [2024-11-04 12:27:17.377874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
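Note: the nvmf_tcp_init trace above (nvmf/common.sh@250-@291) splits the two e810 ports before the tc3 target starts: cvl_0_0 moves into a fresh network namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP/4420 is opened, and a ping in each direction sanity-checks the link. Condensed into plain commands (a simplified sketch; the suite's ipts wrapper additionally tags the iptables rule with an SPDK_NVMF comment so cleanup can strip it later):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                               # root ns -> target
ip netns exec "$NS" ping -c 1 10.0.0.1           # netns -> initiator

The nvmf_tgt application is then launched under "ip netns exec $NS", which is why the target's listener at 10.0.0.2:4420 is only reachable through cvl_0_1.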
00:22:42.958 [2024-11-04 12:27:17.379516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.958 [2024-11-04 12:27:17.379680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:42.958 [2024-11-04 12:27:17.379831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:42.958 [2024-11-04 12:27:17.380015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.529 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:43.529 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:43.529 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:43.529 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.529 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:43.529 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.529 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:43.529 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.529 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:43.789 [2024-11-04 12:27:18.098414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.789 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:43.789 Malloc1 00:22:43.789 [2024-11-04 12:27:18.209753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.789 Malloc2 00:22:43.789 Malloc3 00:22:43.789 Malloc4 00:22:43.789 Malloc5 00:22:44.049 Malloc6 00:22:44.049 Malloc7 00:22:44.049 Malloc8 00:22:44.049 Malloc9 00:22:44.049 Malloc10 00:22:44.049 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.049 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:44.050 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:44.050 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.050 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1713086 00:22:44.050 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1713086 /var/tmp/bdevperf.sock 00:22:44.050 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1713086 ']' 00:22:44.050 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:44.050 12:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:44.050 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:44.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:44.050 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:44.050 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:44.050 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:44.050 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.050 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:22:44.050 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:22:44.050 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:44.050 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:44.050 { 00:22:44.050 "params": { 00:22:44.050 "name": "Nvme$subsystem", 00:22:44.050 "trtype": "$TEST_TRANSPORT", 00:22:44.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.050 "adrfam": "ipv4", 00:22:44.050 "trsvcid": "$NVMF_PORT", 00:22:44.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.050 "hdgst": ${hdgst:-false}, 00:22:44.050 "ddgst": ${ddgst:-false} 00:22:44.050 }, 00:22:44.050 "method": "bdev_nvme_attach_controller" 00:22:44.050 } 00:22:44.050 EOF 00:22:44.050 )") 00:22:44.050 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:44.311 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:44.311 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:44.311 { 00:22:44.311 "params": { 00:22:44.311 "name": "Nvme$subsystem", 00:22:44.311 "trtype": "$TEST_TRANSPORT", 00:22:44.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.311 "adrfam": "ipv4", 00:22:44.311 "trsvcid": "$NVMF_PORT", 00:22:44.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.311 "hdgst": ${hdgst:-false}, 00:22:44.311 "ddgst": ${ddgst:-false} 00:22:44.311 }, 00:22:44.311 "method": "bdev_nvme_attach_controller" 00:22:44.311 } 00:22:44.311 EOF 00:22:44.311 )") 00:22:44.311 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:44.311 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:44.311 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:44.311 { 00:22:44.311 "params": { 00:22:44.311 
"name": "Nvme$subsystem", 00:22:44.311 "trtype": "$TEST_TRANSPORT", 00:22:44.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.311 "adrfam": "ipv4", 00:22:44.311 "trsvcid": "$NVMF_PORT", 00:22:44.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.311 "hdgst": ${hdgst:-false}, 00:22:44.311 "ddgst": ${ddgst:-false} 00:22:44.311 }, 00:22:44.311 "method": "bdev_nvme_attach_controller" 00:22:44.311 } 00:22:44.311 EOF 00:22:44.311 )") 00:22:44.311 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:44.311 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:44.311 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:44.311 { 00:22:44.311 "params": { 00:22:44.311 "name": "Nvme$subsystem", 00:22:44.311 "trtype": "$TEST_TRANSPORT", 00:22:44.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.311 "adrfam": "ipv4", 00:22:44.311 "trsvcid": "$NVMF_PORT", 00:22:44.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.311 "hdgst": ${hdgst:-false}, 00:22:44.311 "ddgst": ${ddgst:-false} 00:22:44.311 }, 00:22:44.311 "method": "bdev_nvme_attach_controller" 00:22:44.311 } 00:22:44.311 EOF 00:22:44.311 )") 00:22:44.311 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:44.311 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:44.311 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:44.311 { 00:22:44.311 "params": { 00:22:44.311 "name": "Nvme$subsystem", 00:22:44.311 "trtype": "$TEST_TRANSPORT", 00:22:44.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.311 "adrfam": "ipv4", 00:22:44.311 "trsvcid": "$NVMF_PORT", 00:22:44.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.311 "hdgst": ${hdgst:-false}, 00:22:44.311 "ddgst": ${ddgst:-false} 00:22:44.311 }, 00:22:44.311 "method": "bdev_nvme_attach_controller" 00:22:44.311 } 00:22:44.311 EOF 00:22:44.311 )") 00:22:44.311 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:44.311 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:44.311 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:44.311 { 00:22:44.311 "params": { 00:22:44.311 "name": "Nvme$subsystem", 00:22:44.311 "trtype": "$TEST_TRANSPORT", 00:22:44.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.311 "adrfam": "ipv4", 00:22:44.311 "trsvcid": "$NVMF_PORT", 00:22:44.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.311 "hdgst": ${hdgst:-false}, 00:22:44.311 "ddgst": ${ddgst:-false} 00:22:44.311 }, 00:22:44.311 "method": "bdev_nvme_attach_controller" 00:22:44.311 } 00:22:44.311 EOF 00:22:44.311 )") 00:22:44.311 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:44.311 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:22:44.311 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:44.311 { 00:22:44.311 "params": { 00:22:44.311 "name": "Nvme$subsystem", 00:22:44.311 "trtype": "$TEST_TRANSPORT", 00:22:44.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.311 "adrfam": "ipv4", 00:22:44.311 "trsvcid": "$NVMF_PORT", 00:22:44.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.312 "hdgst": ${hdgst:-false}, 00:22:44.312 "ddgst": ${ddgst:-false} 00:22:44.312 }, 00:22:44.312 "method": "bdev_nvme_attach_controller" 00:22:44.312 } 00:22:44.312 EOF 00:22:44.312 )") 00:22:44.312 [2024-11-04 12:27:18.660931] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:22:44.312 [2024-11-04 12:27:18.660986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1713086 ] 00:22:44.312 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:44.312 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:44.312 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:44.312 { 00:22:44.312 "params": { 00:22:44.312 "name": "Nvme$subsystem", 00:22:44.312 "trtype": "$TEST_TRANSPORT", 00:22:44.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.312 "adrfam": "ipv4", 00:22:44.312 "trsvcid": "$NVMF_PORT", 00:22:44.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.312 "hdgst": ${hdgst:-false}, 00:22:44.312 "ddgst": ${ddgst:-false} 00:22:44.312 }, 00:22:44.312 "method": "bdev_nvme_attach_controller" 00:22:44.312 } 00:22:44.312 EOF 00:22:44.312 )") 00:22:44.312 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:44.312 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:44.312 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:44.312 { 00:22:44.312 "params": { 00:22:44.312 "name": "Nvme$subsystem", 00:22:44.312 "trtype": "$TEST_TRANSPORT", 00:22:44.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.312 "adrfam": "ipv4", 00:22:44.312 "trsvcid": "$NVMF_PORT", 00:22:44.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.312 "hdgst": ${hdgst:-false}, 00:22:44.312 "ddgst": ${ddgst:-false} 00:22:44.312 }, 00:22:44.312 "method": "bdev_nvme_attach_controller" 00:22:44.312 } 00:22:44.312 EOF 00:22:44.312 )") 00:22:44.312 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:44.312 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:44.312 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:44.312 { 00:22:44.312 "params": { 00:22:44.312 "name": "Nvme$subsystem", 00:22:44.312 "trtype": "$TEST_TRANSPORT", 00:22:44.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.312 
"adrfam": "ipv4", 00:22:44.312 "trsvcid": "$NVMF_PORT", 00:22:44.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.312 "hdgst": ${hdgst:-false}, 00:22:44.312 "ddgst": ${ddgst:-false} 00:22:44.312 }, 00:22:44.312 "method": "bdev_nvme_attach_controller" 00:22:44.312 } 00:22:44.312 EOF 00:22:44.312 )") 00:22:44.312 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:44.312 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:22:44.312 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:22:44.312 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:44.312 "params": { 00:22:44.312 "name": "Nvme1", 00:22:44.312 "trtype": "tcp", 00:22:44.312 "traddr": "10.0.0.2", 00:22:44.312 "adrfam": "ipv4", 00:22:44.312 "trsvcid": "4420", 00:22:44.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.312 "hdgst": false, 00:22:44.312 "ddgst": false 00:22:44.312 }, 00:22:44.312 "method": "bdev_nvme_attach_controller" 00:22:44.312 },{ 00:22:44.312 "params": { 00:22:44.312 "name": "Nvme2", 00:22:44.312 "trtype": "tcp", 00:22:44.312 "traddr": "10.0.0.2", 00:22:44.312 "adrfam": "ipv4", 00:22:44.312 "trsvcid": "4420", 00:22:44.312 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:44.312 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:44.312 "hdgst": false, 00:22:44.312 "ddgst": false 00:22:44.312 }, 00:22:44.312 "method": "bdev_nvme_attach_controller" 00:22:44.312 },{ 00:22:44.312 "params": { 00:22:44.312 "name": "Nvme3", 00:22:44.312 "trtype": "tcp", 00:22:44.312 "traddr": "10.0.0.2", 00:22:44.312 "adrfam": "ipv4", 00:22:44.312 "trsvcid": "4420", 00:22:44.312 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:44.312 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:44.312 "hdgst": false, 00:22:44.312 "ddgst": false 00:22:44.312 }, 00:22:44.312 "method": "bdev_nvme_attach_controller" 00:22:44.312 },{ 00:22:44.312 "params": { 00:22:44.312 "name": "Nvme4", 00:22:44.312 "trtype": "tcp", 00:22:44.312 "traddr": "10.0.0.2", 00:22:44.312 "adrfam": "ipv4", 00:22:44.312 "trsvcid": "4420", 00:22:44.312 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:44.312 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:44.312 "hdgst": false, 00:22:44.312 "ddgst": false 00:22:44.312 }, 00:22:44.312 "method": "bdev_nvme_attach_controller" 00:22:44.312 },{ 00:22:44.312 "params": { 00:22:44.312 "name": "Nvme5", 00:22:44.312 "trtype": "tcp", 00:22:44.312 "traddr": "10.0.0.2", 00:22:44.312 "adrfam": "ipv4", 00:22:44.312 "trsvcid": "4420", 00:22:44.312 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:44.312 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:44.312 "hdgst": false, 00:22:44.312 "ddgst": false 00:22:44.312 }, 00:22:44.312 "method": "bdev_nvme_attach_controller" 00:22:44.312 },{ 00:22:44.312 "params": { 00:22:44.312 "name": "Nvme6", 00:22:44.312 "trtype": "tcp", 00:22:44.312 "traddr": "10.0.0.2", 00:22:44.312 "adrfam": "ipv4", 00:22:44.312 "trsvcid": "4420", 00:22:44.312 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:44.312 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:44.312 "hdgst": false, 00:22:44.312 "ddgst": false 00:22:44.312 }, 00:22:44.312 "method": "bdev_nvme_attach_controller" 00:22:44.312 },{ 00:22:44.312 "params": { 00:22:44.312 "name": "Nvme7", 00:22:44.312 "trtype": "tcp", 00:22:44.312 "traddr": "10.0.0.2", 
00:22:44.312 "adrfam": "ipv4", 00:22:44.312 "trsvcid": "4420", 00:22:44.312 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:44.312 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:44.312 "hdgst": false, 00:22:44.312 "ddgst": false 00:22:44.312 }, 00:22:44.312 "method": "bdev_nvme_attach_controller" 00:22:44.312 },{ 00:22:44.312 "params": { 00:22:44.312 "name": "Nvme8", 00:22:44.312 "trtype": "tcp", 00:22:44.312 "traddr": "10.0.0.2", 00:22:44.312 "adrfam": "ipv4", 00:22:44.312 "trsvcid": "4420", 00:22:44.312 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:44.312 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:44.312 "hdgst": false, 00:22:44.312 "ddgst": false 00:22:44.312 }, 00:22:44.312 "method": "bdev_nvme_attach_controller" 00:22:44.312 },{ 00:22:44.312 "params": { 00:22:44.312 "name": "Nvme9", 00:22:44.312 "trtype": "tcp", 00:22:44.312 "traddr": "10.0.0.2", 00:22:44.312 "adrfam": "ipv4", 00:22:44.312 "trsvcid": "4420", 00:22:44.312 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:44.312 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:44.312 "hdgst": false, 00:22:44.312 "ddgst": false 00:22:44.312 }, 00:22:44.312 "method": "bdev_nvme_attach_controller" 00:22:44.312 },{ 00:22:44.312 "params": { 00:22:44.312 "name": "Nvme10", 00:22:44.312 "trtype": "tcp", 00:22:44.312 "traddr": "10.0.0.2", 00:22:44.312 "adrfam": "ipv4", 00:22:44.312 "trsvcid": "4420", 00:22:44.312 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:44.312 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:44.312 "hdgst": false, 00:22:44.312 "ddgst": false 00:22:44.312 }, 00:22:44.312 "method": "bdev_nvme_attach_controller" 00:22:44.312 }' 00:22:44.312 [2024-11-04 12:27:18.721999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.312 [2024-11-04 12:27:18.758459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.694 Running I/O for 10 seconds... 
00:22:45.694 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:22:45.694 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0
00:22:45.694 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:22:45.694 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:45.694 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:45.954 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:45.954 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:45.954 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:22:45.954 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:22:45.954 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:22:45.954 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:22:45.954 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:22:45.954 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:22:45.954 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:22:45.954 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:22:45.954 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:22:45.954 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:45.954 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:45.954 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:45.954 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3
00:22:45.954 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']'
00:22:45.954 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:22:46.215 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:22:46.215 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:22:46.215 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
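The waitforio polling traced above (and continuing below until read_io_count reaches 131) is the gate that decides the target is actually serving I/O before the shutdown is triggered: up to ten polls, 0.25 s apart, until the Nvme1n1 bdev reports at least 100 completed reads. A condensed reconstruction of the helper from this xtrace follows; rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, so outside the harness an equivalent would be rpc_cmd() { "$SPDK_ROOT/scripts/rpc.py" "$@"; } with SPDK_ROOT an assumed checkout path.

waitforio() {
    local sock=$1 bdev=$2
    [ -z "$sock" ] && return 1   # no RPC socket to poll
    [ -z "$bdev" ] && return 1   # no bdev name given
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        # Ask the bdevperf app for per-bdev I/O statistics over its RPC socket.
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0   # enough reads completed: I/O is demonstrably flowing
            break
        fi
        sleep 0.25
    done
    return $ret
}

In this run, waitforio /var/tmp/bdevperf.sock Nvme1n1 saw the read count climb 3, then 67, then 131 across three polls before returning success.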
00:22:46.215 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:22:46.215 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:46.215 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:46.215 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:46.215 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67
00:22:46.215 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']'
00:22:46.215 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:22:46.475 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:22:46.475 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:22:46.475 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:22:46.475 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:22:46.475 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:46.475 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:46.748 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:46.748 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131
00:22:46.748 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:22:46.748 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:22:46.748 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:22:46.748 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:22:46.748 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1712853
00:22:46.748 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1712853 ']'
00:22:46.748 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1712853
00:22:46.748 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname
00:22:46.748 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:46.748 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1712853
00:22:46.748 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:22:46.748 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
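killprocess, traced just above, is the common/autotest_common.sh helper used here to take down the nvmf target (pid 1712853): probe the PID with kill -0, resolve its command name so a sudo wrapper can be treated specially, then kill and reap it. The sketch below is reconstructed from the xtrace; the sudo branch body is elided because this run took the plain-kill path (process_name resolved to reactor_1).

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    # kill -0 sends no signal; it only fails if the PID no longer exists.
    if kill -0 "$pid" 2>/dev/null; then
        if [ "$(uname)" = Linux ]; then
            # comm= prints just the executable name, with no header row.
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            :   # sudo-wrapped target: harness-specific handling, not exercised in this run
        else
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid" || true   # reap; wait only works for the shell's own children
    fi
}

The echo, kill, and wait steps of this helper are exactly the next three xtrace lines below.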
00:22:46.748 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1712853'
00:22:46.748 killing process with pid 1712853
00:22:46.748 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1712853
00:22:46.748 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1712853
00:22:46.748 [2024-11-04 12:27:21.141178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795620 is same with the state(6) to be set
00:22:46.748 [2024-11-04 12:27:21.141992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.749 [2024-11-04 12:27:21.142032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.749 [2024-11-04 12:27:21.142052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.749 [2024-11-04 12:27:21.142060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.749 [2024-11-04 12:27:21.142071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.749 [2024-11-04 12:27:21.142080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.749 [2024-11-04 12:27:21.142089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.749 [2024-11-04 12:27:21.142097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.749 [2024-11-04 12:27:21.142107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.749 [2024-11-04 12:27:21.142120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.749 [2024-11-04 12:27:21.142130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.749 [2024-11-04 12:27:21.142138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.749 [2024-11-04 12:27:21.142147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.749 [2024-11-04 12:27:21.142154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.749 [2024-11-04 12:27:21.142164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.749 [2024-11-04 12:27:21.142172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.749 [2024-11-04 12:27:21.142181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:46.749 [2024-11-04 12:27:21.142358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 
[2024-11-04 12:27:21.142524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.749 [2024-11-04 12:27:21.142670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.749 [2024-11-04 12:27:21.142680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.750 [2024-11-04 12:27:21.142687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.750 [2024-11-04 12:27:21.142697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.750 [2024-11-04 
12:27:21.142704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.142714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.142722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.142732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.142739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.142741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17981b0 is same with the state(6) to be set
00:22:46.750 [2024-11-04 12:27:21.142757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.142773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.142783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.142791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.142803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.142811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.142823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.142833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.142843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.142851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.142863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.142871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.142886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.142896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.142907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.142915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.142927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.142938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.142949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.142957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.142967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.142977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.142987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.142995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.143007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.143016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.143027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.143034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.143046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.143054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.143066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.143078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.143088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.143096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.143107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.143115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.750 [2024-11-04 12:27:21.143127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.750 [2024-11-04 12:27:21.143136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.751 [2024-11-04 12:27:21.143146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.751 [2024-11-04 12:27:21.143154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.751 [2024-11-04 12:27:21.143163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.751 [2024-11-04 12:27:21.143170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.751 [2024-11-04 12:27:21.143179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.751 [2024-11-04 12:27:21.143187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.751 [2024-11-04 12:27:21.143199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.751 [2024-11-04 12:27:21.143206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.751 [2024-11-04 12:27:21.143232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:46.751 [2024-11-04 12:27:21.143276] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11e2940 was disconnected and freed. reset controller.
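The burst above is the expected shape of an ungraceful qpair teardown: every WRITE still queued by bdevperf is completed back as ABORTED - SQ DELETION (status 00/08), the next CQ poll fails with transport error -6, and bdev_nvme frees the qpair and schedules a controller reset. One way to watch this from the outside is to poll the bdevperf RPC socket while the target dies; bdev_nvme_get_controllers is a standard SPDK RPC, while the jq filter, 0.5 s cadence, and reuse of the harness's rpc_cmd wrapper here are illustrative assumptions.

# Poll the attached NVMe-oF controllers until the bdevperf RPC socket itself goes away.
while ctrlrs=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 2>/dev/null); do
    printf '%s\n' "$ctrlrs" | jq -r '.[].name'   # e.g. Nvme1 ... Nvme10
    sleep 0.5
done

In this run the disconnect/reset cycle repeats once per attached controller, which is why the same pattern resumes immediately below for the next qpair.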
00:22:46.751 [2024-11-04 12:27:21.144258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795af0 is same with the state(6) to be set 00:22:46.751 [2024-11-04 12:27:21.144275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795af0 is same with the state(6) to be set 00:22:46.751 [2024-11-04 12:27:21.145099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.751 [2024-11-04 12:27:21.145123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.751 [2024-11-04 12:27:21.145138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.751 [2024-11-04 12:27:21.145148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.751 [2024-11-04 12:27:21.145159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.751 [2024-11-04 12:27:21.145168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.751 [2024-11-04 12:27:21.145179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.751 [2024-11-04 12:27:21.145189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.751 [2024-11-04 12:27:21.145200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.751 [2024-11-04 12:27:21.145209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.751 [2024-11-04 12:27:21.145221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.751 [2024-11-04 12:27:21.145231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.751 [2024-11-04 12:27:21.145242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.751 [2024-11-04 12:27:21.145252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.751 [2024-11-04 12:27:21.145263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.751 [2024-11-04 12:27:21.145270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.751 [2024-11-04 12:27:21.145279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.751 [2024-11-04 12:27:21.145288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.751 [2024-11-04 
12:27:21.145298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.751 [2024-11-04 12:27:21.145309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.751 [2024-11-04 12:27:21.145318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.751 [2024-11-04 12:27:21.145325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.751 [2024-11-04 12:27:21.145334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.751 [2024-11-04 12:27:21.145342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.751 [2024-11-04 12:27:21.145352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.751 [2024-11-04 12:27:21.145359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.751 [2024-11-04 12:27:21.145368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.751 [2024-11-04 12:27:21.145376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.751 [2024-11-04 12:27:21.145385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.751 [2024-11-04 12:27:21.145393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.751 [2024-11-04 12:27:21.145393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795fc0 is same with the state(6) to be set
00:22:46.751 [2024-11-04 12:27:21.145404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.751 [2024-11-04 12:27:21.145415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.751 [2024-11-04 12:27:21.145424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.751 [2024-11-04 12:27:21.145432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.751 [2024-11-04 12:27:21.145444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.752 [2024-11-04 12:27:21.145452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.752 [2024-11-04 12:27:21.145463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.752 [2024-11-04 12:27:21.145474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.752 [2024-11-04 12:27:21.145484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.752 [2024-11-04 12:27:21.145492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.752 [2024-11-04 12:27:21.145502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.752 [2024-11-04 12:27:21.145511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.752 [2024-11-04 12:27:21.145521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.752 [2024-11-04 12:27:21.145529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.752 [2024-11-04 12:27:21.145541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.752 [2024-11-04 12:27:21.145550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.752 [2024-11-04 12:27:21.145560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.752 [2024-11-04 12:27:21.145570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.752 [2024-11-04 12:27:21.145580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.752 [2024-11-04 12:27:21.145588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.752 [2024-11-04 12:27:21.145599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.752 [2024-11-04 12:27:21.145608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.752 [2024-11-04 12:27:21.145618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.752 [2024-11-04 12:27:21.145627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.752 [2024-11-04 12:27:21.145637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.752 [2024-11-04 12:27:21.145645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.752 [2024-11-04 12:27:21.145655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.752 [2024-11-04 12:27:21.145667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.752 [2024-11-04 12:27:21.145677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.752 [2024-11-04 12:27:21.145685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.752 [2024-11-04 12:27:21.145696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.753 [2024-11-04 12:27:21.145704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.753 [2024-11-04 12:27:21.145714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.753 [2024-11-04 12:27:21.145724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.753 [2024-11-04 12:27:21.145736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.753 [2024-11-04 12:27:21.145744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.753 [2024-11-04 12:27:21.145758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.753 [2024-11-04 12:27:21.145767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.753 [2024-11-04 12:27:21.145777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.753 [2024-11-04 12:27:21.145785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.753 [2024-11-04 12:27:21.145796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.753 [2024-11-04 12:27:21.145803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.753 [2024-11-04 12:27:21.145812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.753 [2024-11-04 12:27:21.145823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.753 [2024-11-04 12:27:21.145833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.753 [2024-11-04 12:27:21.145841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.753 [2024-11-04 12:27:21.145850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.753 [2024-11-04 12:27:21.145858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.753 [2024-11-04 12:27:21.145868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.753 [2024-11-04 12:27:21.145875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.753 [2024-11-04 12:27:21.145884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.145892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.145901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.145908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.145918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.145925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.145936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.145943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.145953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.145961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.145970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.145977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.145986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.145994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.146003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.146011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.146020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.146027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.146036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.146043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.146053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.146061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.146070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.146077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.146086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.146093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.146103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.146111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.146120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.146127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.146136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.146145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.146155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.146162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.146171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.146179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.146188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.146195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.146204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.753 [2024-11-04 12:27:21.146212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.753 [2024-11-04 12:27:21.146221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.753 [2024-11-04 12:27:21.146228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.753 [2024-11-04 12:27:21.146237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.753 [2024-11-04 12:27:21.146244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.753 [2024-11-04 12:27:21.146253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.753 [2024-11-04 12:27:21.146261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.753 [2024-11-04 12:27:21.146271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.754 [2024-11-04 12:27:21.146279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.754 [2024-11-04 12:27:21.146326] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfef2c0 was disconnected and freed. reset controller.
00:22:46.754 [2024-11-04 12:27:21.146621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:46.754 [2024-11-04 12:27:21.146675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb2d70 (9): Bad file descriptor
00:22:46.754 [2024-11-04 12:27:21.147058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17964b0 is same with the state(6) to be set
00:22:46.754 [2024-11-04 12:27:21.148102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796980 is same with the state(6) to be set
00:22:46.754 [2024-11-04 12:27:21.148174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:46.755 [2024-11-04 12:27:21.148221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca9a50 (9): Bad file descriptor
00:22:46.755 [2024-11-04 12:27:21.149056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:46.755 [2024-11-04 12:27:21.149082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb2d70 with addr=10.0.0.2, port=4420
00:22:46.755 [2024-11-04 12:27:21.149092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb2d70 is same with the state(6) to be set
00:22:46.755 [2024-11-04 12:27:21.149305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796e50 is same with the state(6) to be set
00:22:46.755 [2024-11-04 12:27:21.149636] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:46.755 [2024-11-04 12:27:21.150112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:46.755 [2024-11-04 12:27:21.150150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca9a50 with addr=10.0.0.2, port=4420
00:22:46.756 [2024-11-04 12:27:21.150161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9a50 is same with the state(6) to be set
00:22:46.756 [2024-11-04 12:27:21.150177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb2d70 (9): Bad file descriptor
00:22:46.756 [2024-11-04 12:27:21.150253] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:46.756 [2024-11-04 12:27:21.150482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca9a50 (9): Bad file descriptor
00:22:46.756 [2024-11-04 12:27:21.150499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:46.756 [2024-11-04 12:27:21.150511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:46.756 [2024-11-04 12:27:21.150520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:46.756 [2024-11-04 12:27:21.150583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:46.756 [2024-11-04 12:27:21.150596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.756 [2024-11-04 12:27:21.150605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:46.756 [2024-11-04 12:27:21.150612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.756 [2024-11-04 12:27:21.150621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:46.756 [2024-11-04 12:27:21.150629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.756 [2024-11-04 12:27:21.150637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:46.756 [2024-11-04 12:27:21.150644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.756 [2024-11-04 12:27:21.150651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dcd10 is same with the state(6) to be set
00:22:46.756 [2024-11-04 12:27:21.150683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:46.756 [2024-11-04 12:27:21.150693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:46.756 [2024-11-04 12:27:21.150701] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.756 [2024-11-04 12:27:21.150708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.756 [2024-11-04 12:27:21.150716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.756 [2024-11-04 12:27:21.150723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.756 [2024-11-04 12:27:21.150732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.756 [2024-11-04 12:27:21.150739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.756 [2024-11-04 12:27:21.150754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dd030 is same with the state(6) to be set 00:22:46.756 [2024-11-04 12:27:21.150780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.756 [2024-11-04 12:27:21.150788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.756 [2024-11-04 12:27:21.150797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.756 [2024-11-04 12:27:21.150804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.756 [2024-11-04 12:27:21.150812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.756 [2024-11-04 12:27:21.150823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.756 [2024-11-04 12:27:21.150831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.756 [2024-11-04 12:27:21.150838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.756 [2024-11-04 12:27:21.150845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb07f0 is same with the state(6) to be set 00:22:46.756 [2024-11-04 12:27:21.150869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.756 [2024-11-04 12:27:21.150877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.756 [2024-11-04 12:27:21.150887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.756 [2024-11-04 12:27:21.150895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.756 [2024-11-04 12:27:21.150903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:46.756 [2024-11-04 12:27:21.150910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.756 [2024-11-04 12:27:21.150918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.756 [2024-11-04 12:27:21.150925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.756 [2024-11-04 12:27:21.150932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11032a0 is same with the state(6) to be set 00:22:46.756 [2024-11-04 12:27:21.151000] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:46.756 [2024-11-04 12:27:21.151188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:46.756 [2024-11-04 12:27:21.151204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:46.756 [2024-11-04 12:27:21.151212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:46.756 [2024-11-04 12:27:21.151219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:46.756 [2024-11-04 12:27:21.151469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:46.756 [2024-11-04 12:27:21.151515] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:46.756 [2024-11-04 12:27:21.158382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:46.756 [2024-11-04 12:27:21.158996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:46.756 [2024-11-04 12:27:21.159037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb2d70 with addr=10.0.0.2, port=4420 00:22:46.756 [2024-11-04 12:27:21.159050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb2d70 is same with the state(6) to be set 00:22:46.756 [2024-11-04 12:27:21.159172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb2d70 (9): Bad file descriptor 00:22:46.756 [2024-11-04 12:27:21.159288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:46.756 [2024-11-04 12:27:21.159299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:46.756 [2024-11-04 12:27:21.159308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:46.756 [2024-11-04 12:27:21.159402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:46.756 [2024-11-04 12:27:21.159421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
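
[Editor's note] The connect() failures above all report errno = 111, which on Linux is ECONNREFUSED: nothing is listening at 10.0.0.2:4420 while the target is being torn down, so every reconnect attempt is refused at the TCP level. A minimal standalone sketch (plain POSIX sockets, not SPDK code; the loopback address and port below are placeholders) that reproduces the same errno on a host with no listener on that port:

/* sketch: reproduce the "connect() failed, errno = 111" seen in posix.c:1055 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr); /* placeholder target */

    /* With no listener on the port, connect() fails with ECONNREFUSED (111). */
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}

Compiled with cc and run where nothing listens on the port, this prints "connect() failed, errno = 111 (Connection refused)", matching the posix.c:1055 lines in the log.
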
00:22:46.756 [2024-11-04 12:27:21.159836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:46.756 [2024-11-04 12:27:21.159854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca9a50 with addr=10.0.0.2, port=4420
00:22:46.756 [2024-11-04 12:27:21.159862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9a50 is same with the state(6) to be set
00:22:46.756 [2024-11-04 12:27:21.159963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca9a50 (9): Bad file descriptor
00:22:46.756 [2024-11-04 12:27:21.160051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:22:46.756 [2024-11-04 12:27:21.160060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:22:46.756 [2024-11-04 12:27:21.160068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:22:46.756 [2024-11-04 12:27:21.160179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:46.756 [2024-11-04 12:27:21.160546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10dcd10 (9): Bad file descriptor
00:22:46.756 [2024-11-04 12:27:21.160580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10dd030 (9): Bad file descriptor
00:22:46.756 [2024-11-04 12:27:21.160598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb07f0 (9): Bad file descriptor
00:22:46.756 [2024-11-04 12:27:21.160615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11032a0 (9): Bad file descriptor
00:22:46.756 [2024-11-04 12:27:21.165444 .. 12:27:21.165648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1796e50 is same with the state(6) to be set (message repeated 31 times)
00:22:46.757 [2024-11-04 12:27:21.165864 .. 12:27:21.166655] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:20-63 nsid:1 lba:18944-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (44 command/completion pairs)
00:22:46.758 [2024-11-04 12:27:21.166664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b82f0 is same with the state(6) to be set
00:22:46.758 [2024-11-04 12:27:21.166701] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10b82f0 was disconnected and freed. reset controller.
00:22:46.758 [2024-11-04 12:27:21.167864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:46.758 [2024-11-04 12:27:21.167910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcb610 (9): Bad file descriptor
00:22:46.758 [2024-11-04 12:27:21.168562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:46.758 [2024-11-04 12:27:21.168578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcb610 with addr=10.0.0.2, port=4420
00:22:46.758 [2024-11-04 12:27:21.168586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcb610 is same with the state(6) to be set
00:22:46.758 [2024-11-04 12:27:21.168624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcb610 (9): Bad file descriptor
00:22:46.758 [2024-11-04 12:27:21.168678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:22:46.758 [2024-11-04 12:27:21.168688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:22:46.758 [2024-11-04 12:27:21.168695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:22:46.758 [2024-11-04 12:27:21.168729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:46.758 [2024-11-04 12:27:21.168739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
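
[Editor's note] Every aborted completion above carries the status pair (00/08). Per the NVMe specification this is (Status Code Type / Status Code): SCT 0x0 is Generic Command Status, and status code 0x08 within it is "Command Aborted due to SQ Deletion", which is exactly the fate of in-flight READs when their submission queue is torn down during the reset. A small standalone sketch (a hypothetical helper for illustration, not an SPDK API) that decodes the pair printed by spdk_nvme_print_completion:

/* sketch: decode the "(00/08)" status pair from the completions above */
#include <stdio.h>

static const char *decode_status(unsigned sct, unsigned sc)
{
    /* SCT 0x0 = Generic Command Status; SC 0x08 = aborted, SQ deleted. */
    if (sct == 0x0 && sc == 0x08)
        return "ABORTED - SQ DELETION";
    if (sct == 0x0 && sc == 0x00)
        return "SUCCESS";
    return "OTHER"; /* full decode tables live in the NVMe specification */
}

int main(void)
{
    printf("(00/08) => %s\n", decode_status(0x0, 0x08));
    return 0;
}
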
00:22:46.758 [2024-11-04 12:27:21.169089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:46.758 [2024-11-04 12:27:21.169102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb2d70 with addr=10.0.0.2, port=4420
00:22:46.758 [2024-11-04 12:27:21.169110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb2d70 is same with the state(6) to be set
00:22:46.758 [2024-11-04 12:27:21.169140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb2d70 (9): Bad file descriptor
00:22:46.758 [2024-11-04 12:27:21.169170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:46.758 [2024-11-04 12:27:21.169178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:46.758 [2024-11-04 12:27:21.169186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:46.758 [2024-11-04 12:27:21.169218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:46.758 [2024-11-04 12:27:21.169524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:46.758 [2024-11-04 12:27:21.169858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:46.758 [2024-11-04 12:27:21.169872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca9a50 with addr=10.0.0.2, port=4420
00:22:46.758 [2024-11-04 12:27:21.169880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9a50 is same with the state(6) to be set
00:22:46.758 [2024-11-04 12:27:21.169910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca9a50 (9): Bad file descriptor
00:22:46.758 [2024-11-04 12:27:21.169940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:22:46.758 [2024-11-04 12:27:21.169955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:22:46.758 [2024-11-04 12:27:21.169963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:22:46.758 [2024-11-04 12:27:21.169995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
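
[Editor's note] The cnode1/cnode2 sequences above repeat one cycle: disconnect, reconnect attempt, ECONNREFUSED, controller marked failed, reset reported as failed, then another attempt. A minimal sketch of that retry shape (illustrative only; the function names here are hypothetical stand-ins, the real state machine lives in SPDK's nvme_ctrlr.c and bdev_nvme.c):

/* sketch: bounded reconnect retry loop mirroring the log's cycle */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for one reconnect attempt; in SPDK the work is
 * driven by spdk_nvme_ctrlr_reconnect_poll_async() and its callers. */
static bool try_reconnect(const char *nqn)
{
    printf("[%s] resetting controller\n", nqn);
    return false; /* the target stays down in this test, so it always fails */
}

int main(void)
{
    const char *nqn = "nqn.2016-06.io.spdk:cnode1";
    for (int attempt = 0; attempt < 3; attempt++) {
        if (try_reconnect(nqn))
            return 0;
        printf("[%s] controller reinitialization failed\n", nqn);
    }
    printf("[%s] in failed state.\n", nqn);
    return 1;
}
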
00:22:46.758 [2024-11-04 12:27:21.170567 .. 12:27:21.170822] nvme_qpair.c: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (identical four-pair block logged for tqpair=0x11045e0, 0x10d3cf0 and 0xcb0270; each block followed by nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of that tqpair is same with the state(6) to be set)
00:22:46.759 [2024-11-04 12:27:21.170920 .. 12:27:21.172083] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 command/completion pairs)
00:22:46.760 [2024-11-04 12:27:21.172093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109c640 is same with the state(6) to be set
00:22:46.760 [2024-11-04 12:27:21.173368 .. 12:27:21.173586] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-11 nsid:1 lba:24576-25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (12 command/completion pairs)
00:22:46.761 [2024-11-04 12:27:21.173596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173604] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.173987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.173999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.174008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.174018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.174025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.174035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.174044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.174054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.174062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.174072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.174080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.174091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.174098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.174108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.761 [2024-11-04 12:27:21.174116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.761 [2024-11-04 12:27:21.174125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:46.762 [2024-11-04 12:27:21.174343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 
12:27:21.174527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.174534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.174543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a9a00 is same with the state(6) to be set 00:22:46.762 [2024-11-04 12:27:21.175806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.175821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.175834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.175845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.175857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.175867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.175879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.175889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.175900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.175909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.175920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.175928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.175938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.175946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.175959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.175966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.175977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.175984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.175995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.176003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.176013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.176021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.176032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.176039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.176050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.176058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.176068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.176076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.176087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.176095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.176105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.762 [2024-11-04 12:27:21.176113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.762 [2024-11-04 12:27:21.176122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.763 [2024-11-04 12:27:21.176715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.763 [2024-11-04 12:27:21.176726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.176733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.176743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.176755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.176765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.176774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.176785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.176793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.176803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.176810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.176821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.176829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.176839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.176847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.176857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.176865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.176877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.176885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.176896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.176904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.176915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.176923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.176933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.176941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.176951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.176959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.176969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.176977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.176988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.176996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.177004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b6dc0 is same with the state(6) to be set 00:22:46.764 [2024-11-04 12:27:21.178273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178370] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.764 [2024-11-04 12:27:21.178725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.764 [2024-11-04 12:27:21.178735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:46.765 [2024-11-04 12:27:21.178744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:46.765 [2024-11-04 12:27:21.178758 .. 12:27:21.179461] [39 repeated record pairs elided: nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25..63 nsid:1 lba:19584..24448 (stepping by 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0] 
00:22:46.765 [2024-11-04 12:27:21.179470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11dbf50 is same with the state(6) to be set 
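Each aborted completion above carries status ABORTED - SQ DELETION (00/08): the controller resets below tear down the I/O submission queue, so every READ still queued on it is failed back with that status instead of completing. To gauge how much in-flight I/O was cut off, the completion records can simply be counted; a minimal sketch, assuming the console output has been saved to a file (build.log is a hypothetical name): 

  grep -c 'ABORTED - SQ DELETION' build.log   # one match per aborted completion 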
00:22:46.766 [2024-11-04 12:27:21.180978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 
00:22:46.766 [2024-11-04 12:27:21.181001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 
00:22:46.766 [2024-11-04 12:27:21.181012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 
00:22:46.766 task offset: 24576 on job bdev=Nvme1n1 fails 
00:22:46.766 
00:22:46.766 Latency(us) 
[2024-11-04T11:27:21.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:46.766 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:46.766 Job: Nvme1n1 ended in about 0.94 seconds with error 
00:22:46.766 Verification LBA range: start 0x0 length 0x400 
00:22:46.766 Nvme1n1 : 0.94 203.93 12.75 67.98 0.00 232675.31 3577.17 227191.47 
00:22:46.766 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:46.766 Job: Nvme2n1 ended in about 0.94 seconds with error 
00:22:46.766 Verification LBA range: start 0x0 length 0x400 
00:22:46.766 Nvme2n1 : 0.94 203.31 12.71 67.77 0.00 228564.69 3440.64 251658.24 
00:22:46.766 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:46.766 Job: Nvme3n1 ended in about 0.97 seconds with error 
00:22:46.766 Verification LBA range: start 0x0 length 0x400 
00:22:46.766 Nvme3n1 : 0.97 198.00 12.37 66.00 0.00 230023.04 20971.52 241172.48 
00:22:46.766 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:46.766 Job: Nvme4n1 ended in about 0.97 seconds with error 
00:22:46.766 Verification LBA range: start 0x0 length 0x400 
00:22:46.766 Nvme4n1 : 0.97 197.50 12.34 65.83 0.00 225785.60 16930.13 251658.24 
00:22:46.766 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:46.766 Job: Nvme5n1 ended in about 0.97 seconds with error 
00:22:46.766 Verification LBA range: start 0x0 length 0x400 
00:22:46.766 Nvme5n1 : 0.97 131.33 8.21 65.67 0.00 295532.09 22282.24 256901.12 
00:22:46.766 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:46.766 Job: Nvme6n1 ended in about 0.96 seconds with error 
00:22:46.766 Verification LBA range: start 0x0 length 0x400 
00:22:46.766 Nvme6n1 : 0.96 153.48 9.59 45.63 0.00 284067.56 26323.63 255153.49 
00:22:46.766 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:46.766 Verification LBA range: start 0x0 length 0x400 
00:22:46.766 Nvme7n1 : 0.96 266.20 16.64 0.00 0.00 208527.79 14527.15 249910.61 
00:22:46.766 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:46.766 Verification LBA range: start 0x0 length 0x400 
00:22:46.766 Nvme8n1 : 0.96 266.72 16.67 0.00 0.00 203290.03 35826.35 232434.35 
00:22:46.766 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:46.766 Verification LBA range: start 0x0 length 0x400 
00:22:46.766 Nvme9n1 : 0.95 201.63 12.60 0.00 0.00 262161.07 21408.43 255153.49 
00:22:46.766 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:46.766 Job: Nvme10n1 ended in about 0.98 seconds with error 
00:22:46.766 Verification LBA range: start 0x0 length 0x400 
00:22:46.766 Nvme10n1 : 0.98 131.00 8.19 65.50 0.00 264025.03 17476.27 272629.76 
[2024-11-04T11:27:21.336Z] =================================================================================================================== 
00:22:46.766 [2024-11-04T11:27:21.336Z] Total : 1953.10 122.07 444.38 0.00 239800.64 3440.64 272629.76 
00:22:46.766 [2024-11-04 12:27:21.207724] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 
00:22:46.766 [2024-11-04 12:27:21.207779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 
00:22:46.766 [2024-11-04 12:27:21.207891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11045e0 (9): Bad file descriptor 
00:22:46.766 [2024-11-04 12:27:21.207920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d3cf0 (9): Bad file descriptor 
00:22:46.766 [2024-11-04 12:27:21.207938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb0270 (9): Bad file descriptor 
00:22:46.766 [2024-11-04 12:27:21.208464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:46.766 [2024-11-04 12:27:21.208486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb07f0 with addr=10.0.0.2, port=4420 
00:22:46.766 [2024-11-04 12:27:21.208496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb07f0 is same with the state(6) to be set 
00:22:46.766 [2024-11-04 12:27:21.208825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:46.766 [2024-11-04 12:27:21.208838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10dcd10 with addr=10.0.0.2, port=4420 
00:22:46.766 [2024-11-04 12:27:21.208846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dcd10 is same with the state(6) to be set 
00:22:46.766 [2024-11-04 12:27:21.209194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:46.766 [2024-11-04 12:27:21.209211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10dd030 with addr=10.0.0.2, port=4420 
00:22:46.766 [2024-11-04 12:27:21.209218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dd030 is same with the state(6) to be set 
00:22:46.766 [2024-11-04 12:27:21.209504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:46.766 [2024-11-04 12:27:21.209517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11032a0 with addr=10.0.0.2, port=4420 
00:22:46.766 [2024-11-04 12:27:21.209524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11032a0 is same with the state(6) to be set 
00:22:46.766 [2024-11-04 12:27:21.209542] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:46.766 [2024-11-04 12:27:21.209554] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:46.766 [2024-11-04 12:27:21.209565] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
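The MiB/s column in the latency table above follows directly from the IOPS column and the 65536-byte I/O size (IOPS x 64 KiB / 1 MiB). A quick awk sanity check against the Nvme1n1 row: 

  awk 'BEGIN { iops = 203.93; iosz = 65536; printf "%.2f MiB/s\n", iops * iosz / (1024 * 1024) }'   # prints 12.75, matching the table 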
00:22:46.766 [2024-11-04 12:27:21.210657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:46.766 [2024-11-04 12:27:21.210674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:46.766 [2024-11-04 12:27:21.210683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:46.766 [2024-11-04 12:27:21.210739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb07f0 (9): Bad file descriptor 00:22:46.766 [2024-11-04 12:27:21.210755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10dcd10 (9): Bad file descriptor 00:22:46.766 [2024-11-04 12:27:21.210766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10dd030 (9): Bad file descriptor 00:22:46.766 [2024-11-04 12:27:21.210776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11032a0 (9): Bad file descriptor 00:22:46.766 [2024-11-04 12:27:21.211076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:46.766 [2024-11-04 12:27:21.211091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:46.766 [2024-11-04 12:27:21.211101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:46.766 [2024-11-04 12:27:21.211445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:46.766 [2024-11-04 12:27:21.211460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcb610 with addr=10.0.0.2, port=4420 00:22:46.766 [2024-11-04 12:27:21.211468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcb610 is same with the state(6) to be set 00:22:46.766 [2024-11-04 12:27:21.211809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:46.766 [2024-11-04 12:27:21.211821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb2d70 with addr=10.0.0.2, port=4420 00:22:46.766 [2024-11-04 12:27:21.211829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb2d70 is same with the state(6) to be set 00:22:46.766 [2024-11-04 12:27:21.212007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:46.766 [2024-11-04 12:27:21.212017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca9a50 with addr=10.0.0.2, port=4420 00:22:46.766 [2024-11-04 12:27:21.212025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9a50 is same with the state(6) to be set 00:22:46.766 [2024-11-04 12:27:21.212033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:46.766 [2024-11-04 12:27:21.212041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:46.766 [2024-11-04 12:27:21.212050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:22:46.766 [2024-11-04 12:27:21.212065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:46.766 [2024-11-04 12:27:21.212072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:46.766 [2024-11-04 12:27:21.212080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:46.766 [2024-11-04 12:27:21.212090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:46.766 [2024-11-04 12:27:21.212097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:46.766 [2024-11-04 12:27:21.212104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:46.767 [2024-11-04 12:27:21.212115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:46.767 [2024-11-04 12:27:21.212122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:46.767 [2024-11-04 12:27:21.212129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:46.767 [2024-11-04 12:27:21.212187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:46.767 [2024-11-04 12:27:21.212197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:46.767 [2024-11-04 12:27:21.212204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:46.767 [2024-11-04 12:27:21.212210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:46.767 [2024-11-04 12:27:21.212472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:46.767 [2024-11-04 12:27:21.212484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11045e0 with addr=10.0.0.2, port=4420 00:22:46.767 [2024-11-04 12:27:21.212492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11045e0 is same with the state(6) to be set 00:22:46.767 [2024-11-04 12:27:21.212813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:46.767 [2024-11-04 12:27:21.212825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d3cf0 with addr=10.0.0.2, port=4420 00:22:46.767 [2024-11-04 12:27:21.212833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d3cf0 is same with the state(6) to be set 00:22:46.767 [2024-11-04 12:27:21.213022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:46.767 [2024-11-04 12:27:21.213033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb0270 with addr=10.0.0.2, port=4420 00:22:46.767 [2024-11-04 12:27:21.213041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb0270 is same with the state(6) to be set 00:22:46.767 [2024-11-04 12:27:21.213052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcb610 (9): Bad file descriptor 00:22:46.767 [2024-11-04 12:27:21.213062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb2d70 (9): Bad file descriptor 00:22:46.767 [2024-11-04 12:27:21.213072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca9a50 (9): Bad file descriptor 00:22:46.767 [2024-11-04 12:27:21.213112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11045e0 (9): Bad file descriptor 00:22:46.767 [2024-11-04 12:27:21.213124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d3cf0 (9): Bad file descriptor 00:22:46.767 [2024-11-04 12:27:21.213133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb0270 (9): Bad file descriptor 00:22:46.767 [2024-11-04 12:27:21.213142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:46.767 [2024-11-04 12:27:21.213149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:46.767 [2024-11-04 12:27:21.213160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:46.767 [2024-11-04 12:27:21.213171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:46.767 [2024-11-04 12:27:21.213178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:46.767 [2024-11-04 12:27:21.213185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:46.767 [2024-11-04 12:27:21.213195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:46.767 [2024-11-04 12:27:21.213202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:46.767 [2024-11-04 12:27:21.213209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:46.767 [2024-11-04 12:27:21.213239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:46.767 [2024-11-04 12:27:21.213248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:46.767 [2024-11-04 12:27:21.213254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:46.767 [2024-11-04 12:27:21.213261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:46.767 [2024-11-04 12:27:21.213269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:46.767 [2024-11-04 12:27:21.213276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:46.767 [2024-11-04 12:27:21.213286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:46.767 [2024-11-04 12:27:21.213293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:46.767 [2024-11-04 12:27:21.213300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:46.767 [2024-11-04 12:27:21.213310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:46.767 [2024-11-04 12:27:21.213317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:46.767 [2024-11-04 12:27:21.213324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:46.767 [2024-11-04 12:27:21.213353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:46.767 [2024-11-04 12:27:21.213361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:46.767 [2024-11-04 12:27:21.213367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:47.029 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1713086 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1713086 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1713086 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.971 rmmod nvme_tcp 00:22:47.971 
rmmod nvme_fabrics 00:22:47.971 rmmod nvme_keyring 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 1712853 ']' 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 1712853 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1712853 ']' 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1712853 00:22:47.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1712853) - No such process 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1712853 is not found' 00:22:47.971 Process with pid 1712853 is not found 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.971 12:27:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:50.517 00:22:50.517 real 0m7.800s 00:22:50.517 user 0m18.949s 00:22:50.517 sys 0m1.307s 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.517 ************************************ 00:22:50.517 END TEST nvmf_shutdown_tc3 00:22:50.517 ************************************ 00:22:50.517 12:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:50.517 ************************************ 00:22:50.517 START TEST nvmf_shutdown_tc4 00:22:50.517 ************************************ 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.517 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:50.517 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:50.518 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.518 12:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:50.518 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:50.518 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:50.518 12:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:50.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:22:50.518 00:22:50.518 --- 10.0.0.2 ping statistics --- 00:22:50.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.518 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:22:50.518 12:27:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:50.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:22:50.518 00:22:50.518 --- 10.0.0.1 ping statistics --- 00:22:50.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.518 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=1714481 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 1714481 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 1714481 ']' 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
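Condensed, the namespace plumbing traced in nvmf/common.sh above comes down to: move one port of the E810 NIC into a private network namespace to act as the target, keep the other port in the root namespace as the initiator, and verify reachability in both directions before launching nvmf_tgt inside the namespace. A sketch of the equivalent manual steps (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this testbed): 

  ip netns add cvl_0_0_ns_spdk 
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port moves into the namespace 
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator port stays in the root namespace 
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
  ip link set cvl_0_1 up 
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
  ip netns exec cvl_0_0_ns_spdk ip link set lo up 
  ping -c 1 10.0.0.2                                        # root namespace -> target namespace 
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> root namespace 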
00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:50.518 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:50.779 [2024-11-04 12:27:25.130604] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:22:50.779 [2024-11-04 12:27:25.130675] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.779 [2024-11-04 12:27:25.216999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:50.779 [2024-11-04 12:27:25.252061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.779 [2024-11-04 12:27:25.252095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.779 [2024-11-04 12:27:25.252101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.779 [2024-11-04 12:27:25.252106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.779 [2024-11-04 12:27:25.252114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:50.779 [2024-11-04 12:27:25.253709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.779 [2024-11-04 12:27:25.253872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.779 [2024-11-04 12:27:25.254002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.779 [2024-11-04 12:27:25.254004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.728 [2024-11-04 12:27:25.972743] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:51.728 12:27:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.728 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.728 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.728 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.728 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.728 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.728 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.728 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.728 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.728 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.728 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.728 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.728 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.728 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.728 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.728 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.728 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:51.728 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.728 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.728 Malloc1 
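
The ten 'for i in num_subsystems' passes above are appending RPC batches to rpcs.txt, which the single rpc_cmd call then replays in one go; Malloc1 above and Malloc2 through Malloc10 just below are those backing bdevs being created. Per subsystem the appended batch looks roughly like this (a sketch of the usual shutdown.sh pattern, run inside the for-i loop; the 64 MiB/512 B Malloc geometry and the NQN/serial scheme are test-suite defaults, not values printed in this log):

# appended once per subsystem $i, then executed as one rpc_cmd batch
cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF

Batching all of it through one rpc_cmd invocation avoids forking rpc.py once per call, which is why the Malloc bdevs appear below in a single burst.
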
00:22:51.728 [2024-11-04 12:27:26.088328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.728 Malloc2 00:22:51.728 Malloc3 00:22:51.728 Malloc4 00:22:51.728 Malloc5 00:22:51.728 Malloc6 00:22:52.045 Malloc7 00:22:52.045 Malloc8 00:22:52.045 Malloc9 00:22:52.045 Malloc10 00:22:52.045 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.045 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:52.045 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:52.045 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:52.045 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1714862 00:22:52.045 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:52.045 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:52.045 [2024-11-04 12:27:26.536889] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:57.362 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.362 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1714481 00:22:57.362 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1714481 ']' 00:22:57.362 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1714481 00:22:57.362 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:22:57.363 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:57.363 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1714481 00:22:57.363 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:57.363 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:57.363 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1714481' 00:22:57.363 killing process with pid 1714481 00:22:57.363 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 1714481 00:22:57.363 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 1714481 00:22:57.363 [2024-11-04 12:27:31.561679] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9432a0 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.561721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9432a0 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.561728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9432a0 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.561733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9432a0 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.561738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9432a0 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.561992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x943770 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.562075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x943c60 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.562100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x943c60 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.562378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x942dd0 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.562399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x942dd0 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.562405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x942dd0 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.562411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x942dd0 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.562416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x942dd0 is same with the state(6) to be set 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 starting I/O failed: -6 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 starting I/O failed: -6 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 [2024-11-04 12:27:31.562643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa708b0 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.562654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa708b0 is same with the state(6) to be set 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 [2024-11-04 12:27:31.562660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa708b0 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.562665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa708b0 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.562669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa708b0 is same with the state(6) to be set 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 [2024-11-04 12:27:31.562674] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa708b0 is same with the state(6) to be set 00:22:57.363 starting I/O failed: -6 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 starting I/O failed: -6 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 starting I/O failed: -6 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 starting I/O failed: -6 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 [2024-11-04 12:27:31.562908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa70d80 is same with the state(6) to be set 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 [2024-11-04 12:27:31.562923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa70d80 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.562929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa70d80 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.562934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa70d80 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.562956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:57.363 starting I/O failed: -6 00:22:57.363 [2024-11-04 12:27:31.563143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71250 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.563158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71250 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.563163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71250 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.563168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71250 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.563174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71250 is same with the state(6) to be set 00:22:57.363 [2024-11-04 12:27:31.563179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa71250 is same with the state(6) to be set 00:22:57.363 starting I/O failed: -6 00:22:57.363 starting I/O failed: -6 00:22:57.363 starting I/O failed: -6 00:22:57.363 starting I/O failed: -6 00:22:57.363 starting I/O failed: -6 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 starting I/O failed: -6 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 starting I/O failed: -6 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 starting I/O failed: -6 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write 
completed with error (sct=0, sc=8) 00:22:57.363 starting I/O failed: -6 00:22:57.363 Write completed with error (sct=0, sc=8) 00:22:57.363 Write completed with error (sct=0, sc=8)
[... identical 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' lines repeat for the remaining queued writes ...]
00:22:57.364 [2024-11-04 12:27:31.566377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.364 NVMe io qpair process completion error
[... identical write-abort lines repeat ...]
00:22:57.364 [2024-11-04 12:27:31.567481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.364 [2024-11-04 12:27:31.567522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x942a30 is same with the state(6) to be set
[... the same tqpair=0x942a30 message repeats four more times ...]
[... identical write-abort lines repeat ...]
00:22:57.365 [2024-11-04 12:27:31.568364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... identical write-abort lines repeat ...]
00:22:57.365 [2024-11-04 12:27:31.569263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... identical write-abort lines repeat ...]
00:22:57.366 [2024-11-04 12:27:31.570729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.366 NVMe io qpair process completion error
[... identical write-abort lines repeat ...]
00:22:57.366 [2024-11-04 12:27:31.572004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... identical write-abort lines repeat ...]
00:22:57.366 [2024-11-04 12:27:31.572819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... identical write-abort lines repeat ...]
00:22:57.367 [2024-11-04 12:27:31.573717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... identical write-abort lines repeat ...]
00:22:57.367 [2024-11-04 12:27:31.575334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.367 NVMe io qpair process completion error
[... identical write-abort lines repeat ...]
00:22:57.368 [2024-11-04 12:27:31.576624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... identical write-abort lines repeat ...]
00:22:57.368 [2024-11-04 12:27:31.577455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... identical write-abort lines repeat ...]
00:22:57.368 [2024-11-04 12:27:31.578382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.368 Write completed with error (sct=0, sc=8)
00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.368 starting I/O failed: -6 00:22:57.368 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 
00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 [2024-11-04 12:27:31.580574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:57.369 NVMe io qpair process completion error 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 starting I/O failed: -6 00:22:57.369 Write completed with error (sct=0, sc=8) 00:22:57.369 Write completed 
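Two distinct failure signals interleave in this log: per-command completions that arrive with a non-success NVMe status, printed as "(sct=..., sc=...)", and a qpair-level transport failure, reported when the completion-polling call itself returns a negative errno (-6 is -ENXIO, "No such device or address"). A minimal sketch of how an SPDK NVMe host sees both; write_complete_cb and poll_qpair are illustrative names, not identifiers from this test:

    /* A minimal sketch, not the test code. */
    #include "spdk/nvme.h"
    #include <stdio.h>

    /* Per-command path: the completion callback receives an NVMe completion
     * entry whose status fields are the "(sct=..., sc=...)" pair in the log. */
    static void
    write_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            (void)ctx;
            if (spdk_nvme_cpl_is_error(cpl)) {
                    printf("Write completed with error (sct=%d, sc=%d)\n",
                           cpl->status.sct, cpl->status.sc);
            }
    }

    /* Qpair path: spdk_nvme_qpair_process_completions() returns the number of
     * completions reaped, or a negative errno on a transport-level failure;
     * -6 is -ENXIO, rendered as "No such device or address" in the log. */
    static void
    poll_qpair(struct spdk_nvme_qpair *qpair)
    {
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

            if (rc < 0) {
                    fprintf(stderr, "CQ transport error %d on qpair\n", rc);
            }
    }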
00:22:57.369 [... repeated write failures, as above ...]
00:22:57.369 [2024-11-04 12:27:31.581866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.369 [... repeated write failures, as above ...]
00:22:57.369 [2024-11-04 12:27:31.582673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.370 [... repeated write failures, as above ...]
00:22:57.370 [2024-11-04 12:27:31.583613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.370 [... repeated write failures, as above ...]
00:22:57.370 [2024-11-04 12:27:31.585278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.370 NVMe io qpair process completion error
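For reference, the status pair in these lines decodes against the NVMe spec constants that ship with SPDK: sct=0 is the Generic Command Status type, and within it sc=8 is "Command Aborted due to SQ Deletion", consistent with writes being failed back as their queue pairs are torn down. A small check, shown only to pin down the constants:

    /* Sketch only: pinning the logged status pair to SPDK's spec constants. */
    #include "spdk/nvme_spec.h"
    #include <assert.h>

    int
    main(void)
    {
            /* sct=0 -> generic command status; sc=8 -> command aborted
             * because its submission queue was deleted. */
            assert(SPDK_NVME_SCT_GENERIC == 0x0);
            assert(SPDK_NVME_SC_ABORTED_SQ_DELETION == 0x8);
            return 0;
    }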
00:22:57.371 [... repeated write failures, as above ...]
00:22:57.371 [2024-11-04 12:27:31.586448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.371 [... repeated write failures, as above ...]
00:22:57.371 [2024-11-04 12:27:31.587272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.371 [... repeated write failures, as above ...]
00:22:57.371 [2024-11-04 12:27:31.588207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.372 [... repeated write failures, as above ...]
00:22:57.372 [2024-11-04 12:27:31.590303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.372 NVMe io qpair process completion error
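After a CQ transport error the qpair is unusable from the host's point of view; whether this test reconnects or simply drains the remaining I/O is not shown in this excerpt. As a hypothetical recovery path (not necessarily what the test does), an SPDK host can release the failed qpair and allocate a fresh one on the same controller:

    #include "spdk/nvme.h"

    /* Hypothetical helper: drop a qpair that hit a transport error and get a
     * new one with default options. Returns NULL if allocation fails. */
    static struct spdk_nvme_qpair *
    replace_failed_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *bad)
    {
            spdk_nvme_ctrlr_free_io_qpair(bad);
            return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    }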
00:22:57.372 [... repeated write failures, as above ...]
00:22:57.372 [2024-11-04 12:27:31.591406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.373 [... repeated write failures, as above ...]
00:22:57.373 [2024-11-04 12:27:31.592365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.373 [... repeated write failures, as above ...]
00:22:57.373 [2024-11-04 12:27:31.593290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.374 [... repeated write failures, as above ...]
00:22:57.374 [2024-11-04 12:27:31.595501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.374 NVMe io qpair process completion error
00:22:57.374 [... repeated write failures, as above ...]
00:22:57.374 [2024-11-04 12:27:31.596765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.374 [... repeated write failures, as above ...]
00:22:57.374 [2024-11-04 12:27:31.597587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.374 [... repeated write failures, as above ...]
Write completed with error (sct=0, sc=8) 00:22:57.374 starting I/O failed: -6 00:22:57.374 Write completed with error (sct=0, sc=8) 00:22:57.374 starting I/O failed: -6 00:22:57.374 Write completed with error (sct=0, sc=8) 00:22:57.374 Write completed with error (sct=0, sc=8) 00:22:57.374 starting I/O failed: -6 00:22:57.374 Write completed with error (sct=0, sc=8) 00:22:57.374 starting I/O failed: -6 00:22:57.374 Write completed with error (sct=0, sc=8) 00:22:57.374 starting I/O failed: -6 00:22:57.374 Write completed with error (sct=0, sc=8) 00:22:57.374 Write completed with error (sct=0, sc=8) 00:22:57.374 starting I/O failed: -6 00:22:57.374 Write completed with error (sct=0, sc=8) 00:22:57.374 starting I/O failed: -6 00:22:57.374 Write completed with error (sct=0, sc=8) 00:22:57.374 starting I/O failed: -6 00:22:57.374 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 [2024-11-04 12:27:31.598516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 
starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 
starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 [2024-11-04 12:27:31.600153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:57.375 NVMe io qpair process completion error 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 Write completed with error (sct=0, sc=8) 00:22:57.375 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 [2024-11-04 12:27:31.601399] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 [2024-11-04 12:27:31.602216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error 
(sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: 
-6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 [2024-11-04 12:27:31.603150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.376 starting I/O failed: -6 00:22:57.376 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 
Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 [2024-11-04 12:27:31.606145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:57.377 NVMe io qpair process completion error 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 
00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 [2024-11-04 12:27:31.607360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with 
error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 [2024-11-04 12:27:31.608199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 Write completed with error (sct=0, sc=8) 00:22:57.377 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write 
completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 [2024-11-04 12:27:31.609156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error 
(sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.378 Write completed with error (sct=0, sc=8) 00:22:57.378 starting I/O failed: -6 00:22:57.379 Write completed with error (sct=0, sc=8) 00:22:57.379 starting I/O failed: -6 00:22:57.379 Write completed with error 
(sct=0, sc=8) 00:22:57.379 starting I/O failed: -6 00:22:57.379 Write completed with error (sct=0, sc=8) 00:22:57.379 starting I/O failed: -6 00:22:57.379 Write completed with error (sct=0, sc=8) 00:22:57.379 starting I/O failed: -6 00:22:57.379 [2024-11-04 12:27:31.610856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:57.379 NVMe io qpair process completion error 00:22:57.379 Initializing NVMe Controllers 00:22:57.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:22:57.379 Controller IO queue size 128, less than required. 00:22:57.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:22:57.379 Controller IO queue size 128, less than required. 00:22:57.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:22:57.379 Controller IO queue size 128, less than required. 00:22:57.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:22:57.379 Controller IO queue size 128, less than required. 00:22:57.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:22:57.379 Controller IO queue size 128, less than required. 00:22:57.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:22:57.379 Controller IO queue size 128, less than required. 00:22:57.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:22:57.379 Controller IO queue size 128, less than required. 00:22:57.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:57.379 Controller IO queue size 128, less than required. 00:22:57.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:22:57.379 Controller IO queue size 128, less than required. 00:22:57.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:22:57.379 Controller IO queue size 128, less than required. 00:22:57.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
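[editor's note] The advisory above is actionable: the target capped each I/O queue at 128 entries, so anything submitted beyond that depth sits in a software queue inside the NVMe driver. A hedged sketch of honoring it on a manual re-run follows; the exact invocation shutdown.sh used is not visible in this log, and the flag spellings assume spdk_nvme_perf's usual -q/-o/-w/-t/-r options:

    # Hypothetical re-run against one of the subsystems above, with the per-qpair
    # queue depth (-q) held at or below the controller's reported 128-entry limit.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'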
00:22:57.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:57.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:57.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:57.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:57.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:57.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:57.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:57.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:57.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:57.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:57.379 Initialization complete. Launching workers.
00:22:57.379 ========================================================
00:22:57.379                                                                     Latency(us)
00:22:57.379 Device Information                                                      :    IOPS   MiB/s   Average      min       max
00:22:57.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  1875.58   80.59  68265.33   790.87  117995.61
00:22:57.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1905.10   81.86  66546.23   433.92  143825.07
00:22:57.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  1873.27   80.49  67689.93   897.01  145433.77
00:22:57.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  1891.07   81.26  67072.61   830.89  118237.98
00:22:57.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  1886.25   81.05  67266.89   689.19  120250.31
00:22:57.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  1924.99   82.71  65943.57   603.00  115875.41
00:22:57.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  1895.47   81.45  66990.79   697.97  124059.55
00:22:57.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1885.63   81.02  67368.78   682.62  126120.41
00:22:57.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  1905.94   81.90  66681.16   827.21  128418.66
00:22:57.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  1887.51   81.10  67354.44   817.47  118473.90
00:22:57.379 ========================================================
00:22:57.379 Total                                                                   : 18930.80  813.43  67113.44   433.92  145433.77
00:22:57.379
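[editor's note] The Total row is derivable from the ten device rows: IOPS and MiB/s are straight sums, Average is the IOPS-weighted mean, and min/max are taken across all devices. A quick shell check of that arithmetic, with the numbers copied from the table above:

    # Sums the per-row values and recomputes the aggregate statistics.
    printf '%s\n' \
        '1875.58 80.59 68265.33 790.87 117995.61' \
        '1905.10 81.86 66546.23 433.92 143825.07' \
        '1873.27 80.49 67689.93 897.01 145433.77' \
        '1891.07 81.26 67072.61 830.89 118237.98' \
        '1886.25 81.05 67266.89 689.19 120250.31' \
        '1924.99 82.71 65943.57 603.00 115875.41' \
        '1895.47 81.45 66990.79 697.97 124059.55' \
        '1885.63 81.02 67368.78 682.62 126120.41' \
        '1905.94 81.90 66681.16 827.21 128418.66' \
        '1887.51 81.10 67354.44 817.47 118473.90' |
    awk '{ iops += $1; mib += $2; wsum += $1 * $3
           if (min == "" || $4 + 0 < min + 0) min = $4
           if ($5 + 0 > max + 0) max = $5 }
         END { printf "%.2f %.2f %.2f %.2f %.2f\n", iops, mib, wsum / iops, min, max }'
    # Prints roughly "18930.81 813.43 67113.4x 433.92 145433.77": MiB/s, Average,
    # min and max match the Total row; the IOPS sum differs from the log's
    # 18930.80 only because the displayed per-row values are already rounded.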
00:22:57.379 [2024-11-04 12:27:31.613584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eae60 is same with the state(6) to be set
00:22:57.379 [2024-11-04 12:27:31.613628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e69d0 is same with the state(6) to be set
00:22:57.379 [2024-11-04 12:27:31.613658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4c90 is same with the state(6) to be set
00:22:57.379 [2024-11-04 12:27:31.613686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4960 is same with the state(6) to be set
00:22:57.379 [2024-11-04 12:27:31.613717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4fc0 is same with the state(6) to be set
00:22:57.379 [2024-11-04 12:27:31.613762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4630 is same with the state(6) to be set
00:22:57.379 [2024-11-04 12:27:31.613794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb4c0 is same with the state(6) to be set
00:22:57.379 [2024-11-04 12:27:31.613823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e67f0 is same with the state(6) to be set
00:22:57.379 [2024-11-04 12:27:31.613851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb190 is same with the state(6) to be set
00:22:57.379 [2024-11-04 12:27:31.613879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e6bb0 is same with the state(6) to be set
00:22:57.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:57.379 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1714862
00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1714862
00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1714862
00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
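[editor's note] The xtrace above (autotest_common.sh lines 650-677) is the framework's NOT helper asserting that `wait 1714862` fails: es=1 is captured from the dead perf process, and the traced `(( !es == 0 ))` turns that failure into a passing assertion. A hedged sketch of the pattern, reconstructed from the traced fragments rather than copied from SPDK's source:

    # NOT <cmd...>: succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?            # run the command; here `wait 1714862` sets es=1
        if (( es > 128 )); then  # traced check: exit codes >128 mean death by signal
            es=$(( es & 127 ))   # hypothetical normalization, for illustration only
        fi
        (( es != 0 ))            # mirrors the traced `(( !es == 0 ))` test
    }

    NOT wait 1714862             # passes: the perf process exited nonzero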
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:58.320 rmmod nvme_tcp 00:22:58.320 rmmod nvme_fabrics 00:22:58.320 rmmod nvme_keyring 00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 1714481 ']' 00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 1714481 00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1714481 ']' 00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1714481 00:22:58.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1714481) - No such process 00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1714481 is not found' 00:22:58.320 Process with pid 1714481 is not found 00:22:58.320 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:58.321 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:58.321 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:58.321 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:58.321 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:22:58.321 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:58.321 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:22:58.582 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:58.582 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:58.582 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.582 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:58.582 12:27:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.496 12:27:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:00.496 00:23:00.496 real 0m10.281s 00:23:00.496 user 0m27.975s 00:23:00.496 sys 0m4.034s 00:23:00.496 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:00.496 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:00.496 ************************************ 00:23:00.496 END TEST nvmf_shutdown_tc4 00:23:00.496 ************************************ 00:23:00.496 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:00.496 00:23:00.496 real 0m43.318s 00:23:00.496 user 1m45.386s 00:23:00.496 sys 0m13.722s 00:23:00.496 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:00.496 12:27:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:00.496 ************************************ 00:23:00.496 END TEST nvmf_shutdown 00:23:00.496 ************************************ 00:23:00.496 12:27:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:23:00.496 00:23:00.496 real 12m40.994s 00:23:00.496 user 26m57.913s 00:23:00.496 sys 3m44.208s 00:23:00.496 12:27:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:00.496 12:27:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:00.496 ************************************ 00:23:00.496 END TEST nvmf_target_extra 00:23:00.496 ************************************ 00:23:00.757 12:27:35 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:00.757 12:27:35 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:00.757 12:27:35 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:00.757 12:27:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:00.757 ************************************ 00:23:00.757 START TEST nvmf_host 00:23:00.757 ************************************ 00:23:00.757 12:27:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:00.757 * Looking for test storage... 
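The END TEST banners and the real/user/sys triplets above come from the harness's run_test wrapper, which times each suite and forwards its exit status. A minimal sketch of that wrapper pattern, assuming a generic banner format and a $rootdir variable (illustrative only, not the exact autotest_common.sh implementation):

    run_test() {
            local suite=$1; shift
            echo "START TEST $suite"
            time "$@"              # emits the real/user/sys lines seen above
            local rc=$?            # exit status of the timed suite
            echo "END TEST $suite"
            return $rc
    }

    run_test nvmf_host "$rootdir/test/nvmf/nvmf_host.sh" --transport=tcp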
00:23:00.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:00.757 12:27:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:00.757 12:27:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:00.757 12:27:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:01.019 12:27:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:01.019 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:01.019 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:01.019 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:01.019 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:01.019 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:01.019 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:01.019 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:01.019 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:01.019 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:01.019 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:01.019 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:01.019 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:01.019 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:01.019 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:01.019 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:01.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.020 --rc genhtml_branch_coverage=1 00:23:01.020 --rc genhtml_function_coverage=1 00:23:01.020 --rc genhtml_legend=1 00:23:01.020 --rc geninfo_all_blocks=1 00:23:01.020 --rc geninfo_unexecuted_blocks=1 00:23:01.020 00:23:01.020 ' 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:01.020 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.020 --rc genhtml_branch_coverage=1 00:23:01.020 --rc genhtml_function_coverage=1 00:23:01.020 --rc genhtml_legend=1 00:23:01.020 --rc geninfo_all_blocks=1 00:23:01.020 --rc geninfo_unexecuted_blocks=1 00:23:01.020 00:23:01.020 ' 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:01.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.020 --rc genhtml_branch_coverage=1 00:23:01.020 --rc genhtml_function_coverage=1 00:23:01.020 --rc genhtml_legend=1 00:23:01.020 --rc geninfo_all_blocks=1 00:23:01.020 --rc geninfo_unexecuted_blocks=1 00:23:01.020 00:23:01.020 ' 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:01.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.020 --rc genhtml_branch_coverage=1 00:23:01.020 --rc genhtml_function_coverage=1 00:23:01.020 --rc genhtml_legend=1 00:23:01.020 --rc geninfo_all_blocks=1 00:23:01.020 --rc geninfo_unexecuted_blocks=1 00:23:01.020 00:23:01.020 ' 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:01.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.020 ************************************ 00:23:01.020 START TEST nvmf_multicontroller 00:23:01.020 ************************************ 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:01.020 * Looking for test storage... 00:23:01.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:23:01.020 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:01.283 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:01.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.284 --rc genhtml_branch_coverage=1 00:23:01.284 --rc genhtml_function_coverage=1 00:23:01.284 --rc genhtml_legend=1 00:23:01.284 --rc geninfo_all_blocks=1 00:23:01.284 --rc geninfo_unexecuted_blocks=1 00:23:01.284 00:23:01.284 ' 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:01.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.284 --rc genhtml_branch_coverage=1 00:23:01.284 --rc genhtml_function_coverage=1 00:23:01.284 --rc genhtml_legend=1 00:23:01.284 --rc geninfo_all_blocks=1 00:23:01.284 --rc geninfo_unexecuted_blocks=1 00:23:01.284 00:23:01.284 ' 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:01.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.284 --rc genhtml_branch_coverage=1 00:23:01.284 --rc genhtml_function_coverage=1 00:23:01.284 --rc genhtml_legend=1 00:23:01.284 --rc geninfo_all_blocks=1 00:23:01.284 --rc geninfo_unexecuted_blocks=1 00:23:01.284 00:23:01.284 ' 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:01.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.284 --rc genhtml_branch_coverage=1 00:23:01.284 --rc genhtml_function_coverage=1 00:23:01.284 --rc genhtml_legend=1 00:23:01.284 --rc geninfo_all_blocks=1 00:23:01.284 --rc geninfo_unexecuted_blocks=1 00:23:01.284 00:23:01.284 ' 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:01.284 12:27:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:01.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:01.284 12:27:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:01.284 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:01.285 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:01.285 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.285 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.285 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.285 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:01.285 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:01.285 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:01.285 12:27:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:09.431 
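The e810/x722/mlx arrays declared here are filled just below from a vendor:device cache (0x8086:0x159b is the E810 part this rig reports), and each matching PCI address is then resolved to its kernel net device through sysfs. A standalone sketch of that resolution step, assuming the usual sysfs layout; the cached-bus scan and the cvl_* renaming that the real common.sh performs are outside this snippet:

    intel=0x8086
    e810=()
    for dev in /sys/bus/pci/devices/*; do
            read -r vendor < "$dev/vendor"
            read -r device < "$dev/device"
            if [[ $vendor == "$intel" ]] && [[ $device == 0x1592 || $device == 0x159b ]]; then
                    e810+=("${dev##*/}")                   # keep the bare PCI address
            fi
    done
    for pci in "${e810[@]}"; do
            for net in "/sys/bus/pci/devices/$pci/net/"*; do
                    [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"
            done
    done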
12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:09.431 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:09.431 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.431 12:27:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:09.431 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:09.431 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
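With both ports found, the nvmf_tcp_init sequence that follows moves one port into a private network namespace so initiator and target traffic cross the physical link. Condensed to its effective commands, using the interface names and 10.0.0.0/24 addressing reported below:

    ip netns add cvl_0_0_ns_spdk                           # the target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                     # host to namespaced target sanity check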
00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:09.431 12:27:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:09.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:23:09.431 00:23:09.431 --- 10.0.0.2 ping statistics --- 00:23:09.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.431 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:09.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:23:09.431 00:23:09.431 --- 10.0.0.1 ping statistics --- 00:23:09.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.431 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=1720278 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 1720278 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1720278 ']' 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:09.431 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.431 [2024-11-04 12:27:43.134996] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:23:09.431 [2024-11-04 12:27:43.135067] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.431 [2024-11-04 12:27:43.222990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:09.431 [2024-11-04 12:27:43.275219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.431 [2024-11-04 12:27:43.275273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.431 [2024-11-04 12:27:43.275282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.431 [2024-11-04 12:27:43.275290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.431 [2024-11-04 12:27:43.275297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.432 [2024-11-04 12:27:43.277337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.432 [2024-11-04 12:27:43.277504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.432 [2024-11-04 12:27:43.277505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.432 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.432 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:09.432 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:09.432 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:09.432 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.432 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.432 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:09.432 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.432 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.432 [2024-11-04 12:27:43.972631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.432 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.432 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:09.432 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.432 12:27:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.692 Malloc0 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.692 [2024-11-04 12:27:44.041817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.692 [2024-11-04 12:27:44.053761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.692 Malloc1 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1720627 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1720627 /var/tmp/bdevperf.sock 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1720627 ']' 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
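bdevperf is launched with -z (stay idle until driven over RPC) on its own socket, and every controller in this test is attached through that socket. The first successful attach, reproduced with rpc.py (rpc_cmd above is a thin wrapper over it; the retry loop stands in for waitforlisten):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bdevperf.sock
    "$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w write -t 1 -f &
    until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
            sleep 0.1                                      # wait for the RPC socket to answer
    done
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller \
            -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1      # prints NVMe0n1 on success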
00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:09.692 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.633 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:10.633 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:10.633 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:10.633 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.633 12:27:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.894 NVMe0n1 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.894 1 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.894 request: 00:23:10.894 { 00:23:10.894 "name": "NVMe0", 00:23:10.894 "trtype": "tcp", 00:23:10.894 "traddr": "10.0.0.2", 00:23:10.894 "adrfam": "ipv4", 00:23:10.894 "trsvcid": "4420", 00:23:10.894 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:10.894 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:10.894 "hostaddr": "10.0.0.1", 00:23:10.894 "prchk_reftag": false, 00:23:10.894 "prchk_guard": false, 00:23:10.894 "hdgst": false, 00:23:10.894 "ddgst": false, 00:23:10.894 "allow_unrecognized_csi": false, 00:23:10.894 "method": "bdev_nvme_attach_controller", 00:23:10.894 "req_id": 1 00:23:10.894 } 00:23:10.894 Got JSON-RPC error response 00:23:10.894 response: 00:23:10.894 { 00:23:10.894 "code": -114, 00:23:10.894 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:10.894 } 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.894 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.895 request: 00:23:10.895 { 00:23:10.895 "name": "NVMe0", 00:23:10.895 "trtype": "tcp", 00:23:10.895 "traddr": "10.0.0.2", 00:23:10.895 "adrfam": "ipv4", 00:23:10.895 "trsvcid": "4420", 00:23:10.895 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:10.895 "hostaddr": "10.0.0.1", 00:23:10.895 "prchk_reftag": false, 00:23:10.895 "prchk_guard": false, 00:23:10.895 "hdgst": false, 00:23:10.895 "ddgst": false, 00:23:10.895 "allow_unrecognized_csi": false, 00:23:10.895 "method": "bdev_nvme_attach_controller", 00:23:10.895 "req_id": 1 00:23:10.895 } 00:23:10.895 Got JSON-RPC error response 00:23:10.895 response: 00:23:10.895 { 00:23:10.895 "code": -114, 00:23:10.895 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:10.895 } 00:23:10.895 12:27:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.895 request: 00:23:10.895 { 00:23:10.895 "name": "NVMe0", 00:23:10.895 "trtype": "tcp", 00:23:10.895 "traddr": "10.0.0.2", 00:23:10.895 "adrfam": "ipv4", 00:23:10.895 "trsvcid": "4420", 00:23:10.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.895 "hostaddr": "10.0.0.1", 00:23:10.895 "prchk_reftag": false, 00:23:10.895 "prchk_guard": false, 00:23:10.895 "hdgst": false, 00:23:10.895 "ddgst": false, 00:23:10.895 "multipath": "disable", 00:23:10.895 "allow_unrecognized_csi": false, 00:23:10.895 "method": "bdev_nvme_attach_controller", 00:23:10.895 "req_id": 1 00:23:10.895 } 00:23:10.895 Got JSON-RPC error response 00:23:10.895 response: 00:23:10.895 { 00:23:10.895 "code": -114, 00:23:10.895 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:10.895 } 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:10.895 12:27:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.895 request: 00:23:10.895 { 00:23:10.895 "name": "NVMe0", 00:23:10.895 "trtype": "tcp", 00:23:10.895 "traddr": "10.0.0.2", 00:23:10.895 "adrfam": "ipv4", 00:23:10.895 "trsvcid": "4420", 00:23:10.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.895 "hostaddr": "10.0.0.1", 00:23:10.895 "prchk_reftag": false, 00:23:10.895 "prchk_guard": false, 00:23:10.895 "hdgst": false, 00:23:10.895 "ddgst": false, 00:23:10.895 "multipath": "failover", 00:23:10.895 "allow_unrecognized_csi": false, 00:23:10.895 "method": "bdev_nvme_attach_controller", 00:23:10.895 "req_id": 1 00:23:10.895 } 00:23:10.895 Got JSON-RPC error response 00:23:10.895 response: 00:23:10.895 { 00:23:10.895 "code": -114, 00:23:10.895 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:10.895 } 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.895 NVMe0n1 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.895 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.157 00:23:11.157 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.157 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:11.157 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:11.157 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.157 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.157 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.157 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:11.157 12:27:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:12.540 { 00:23:12.540 "results": [ 00:23:12.540 { 00:23:12.540 "job": "NVMe0n1", 00:23:12.540 "core_mask": "0x1", 00:23:12.540 "workload": "write", 00:23:12.540 "status": "finished", 00:23:12.540 "queue_depth": 128, 00:23:12.540 "io_size": 4096, 00:23:12.540 "runtime": 1.004645, 00:23:12.540 "iops": 28906.728247291332, 00:23:12.540 "mibps": 112.91690721598177, 00:23:12.540 "io_failed": 0, 00:23:12.540 "io_timeout": 0, 00:23:12.540 "avg_latency_us": 4419.560869116077, 00:23:12.540 "min_latency_us": 2102.6133333333332, 00:23:12.540 "max_latency_us": 10868.053333333333 00:23:12.540 } 00:23:12.540 ], 00:23:12.540 "core_count": 1 00:23:12.540 } 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1720627 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 1720627 ']' 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1720627 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1720627 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1720627' 00:23:12.540 killing process with pid 1720627 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1720627 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1720627 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:23:12.540 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:12.540 [2024-11-04 12:27:44.184668] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:23:12.540 [2024-11-04 12:27:44.184728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720627 ] 00:23:12.540 [2024-11-04 12:27:44.245172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.540 [2024-11-04 12:27:44.281049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.540 [2024-11-04 12:27:45.554073] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name b67fa22c-8a66-4aaf-bdd1-e095e1dae5f4 already exists 00:23:12.540 [2024-11-04 12:27:45.554106] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:b67fa22c-8a66-4aaf-bdd1-e095e1dae5f4 alias for bdev NVMe1n1 00:23:12.540 [2024-11-04 12:27:45.554115] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:12.540 Running I/O for 1 seconds... 00:23:12.540 28913.00 IOPS, 112.94 MiB/s 00:23:12.540 Latency(us) 00:23:12.540 [2024-11-04T11:27:47.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.540 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:12.540 NVMe0n1 : 1.00 28906.73 112.92 0.00 0.00 4419.56 2102.61 10868.05 00:23:12.540 [2024-11-04T11:27:47.110Z] =================================================================================================================== 00:23:12.540 [2024-11-04T11:27:47.110Z] Total : 28906.73 112.92 0.00 0.00 4419.56 2102.61 10868.05 00:23:12.540 Received shutdown signal, test time was about 1.000000 seconds 00:23:12.540 00:23:12.540 Latency(us) 00:23:12.540 [2024-11-04T11:27:47.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.540 [2024-11-04T11:27:47.110Z] =================================================================================================================== 00:23:12.540 [2024-11-04T11:27:47.110Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:12.540 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:12.540 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:12.541 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:12.541 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:12.541 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:12.541 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:12.541 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:12.541 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:12.541 12:27:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:12.541 rmmod nvme_tcp 00:23:12.541 rmmod nvme_fabrics 00:23:12.541 rmmod nvme_keyring 00:23:12.541 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:12.541 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:12.541 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 
00:23:12.541 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 1720278 ']' 00:23:12.541 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 1720278 00:23:12.541 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1720278 ']' 00:23:12.541 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1720278 00:23:12.541 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:12.541 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:12.541 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1720278 00:23:12.541 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:12.541 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:12.541 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1720278' 00:23:12.541 killing process with pid 1720278 00:23:12.541 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1720278 00:23:12.541 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1720278 00:23:12.801 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:12.801 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:12.801 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:12.801 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:12.801 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:23:12.801 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:12.801 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:23:12.801 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:12.801 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:12.801 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.801 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.801 12:27:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:15.354 00:23:15.354 real 0m13.883s 00:23:15.354 user 0m17.109s 00:23:15.354 sys 0m6.364s 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.354 ************************************ 00:23:15.354 END TEST nvmf_multicontroller 00:23:15.354 ************************************ 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.354 ************************************ 00:23:15.354 START TEST nvmf_aer 00:23:15.354 ************************************ 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:15.354 * Looking for test storage... 00:23:15.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:15.354 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:15.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.355 --rc genhtml_branch_coverage=1 00:23:15.355 --rc genhtml_function_coverage=1 00:23:15.355 --rc genhtml_legend=1 00:23:15.355 --rc geninfo_all_blocks=1 00:23:15.355 --rc geninfo_unexecuted_blocks=1 00:23:15.355 00:23:15.355 ' 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:15.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.355 --rc genhtml_branch_coverage=1 00:23:15.355 --rc genhtml_function_coverage=1 00:23:15.355 --rc genhtml_legend=1 00:23:15.355 --rc geninfo_all_blocks=1 00:23:15.355 --rc geninfo_unexecuted_blocks=1 00:23:15.355 00:23:15.355 ' 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:15.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.355 --rc genhtml_branch_coverage=1 00:23:15.355 --rc genhtml_function_coverage=1 00:23:15.355 --rc genhtml_legend=1 00:23:15.355 --rc geninfo_all_blocks=1 00:23:15.355 --rc geninfo_unexecuted_blocks=1 00:23:15.355 00:23:15.355 ' 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:15.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.355 --rc genhtml_branch_coverage=1 00:23:15.355 --rc genhtml_function_coverage=1 00:23:15.355 --rc genhtml_legend=1 00:23:15.355 --rc geninfo_all_blocks=1 00:23:15.355 --rc geninfo_unexecuted_blocks=1 00:23:15.355 00:23:15.355 ' 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:15.355 12:27:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:21.941 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:21.941 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:21.941 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:21.941 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:21.942 12:27:56 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:21.942 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:21.942 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:22.202 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:22.202 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:22.202 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:22.202 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:22.202 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:22.202 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:22.202 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:22.202 
12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:22.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:22.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:23:22.202 00:23:22.202 --- 10.0.0.2 ping statistics --- 00:23:22.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.202 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:23:22.202 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:22.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:22.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:23:22.202 00:23:22.202 --- 10.0.0.1 ping statistics --- 00:23:22.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.202 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=1725305 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 1725305 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1725305 ']' 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:22.463 12:27:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:22.463 [2024-11-04 12:27:56.882766] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:23:22.463 [2024-11-04 12:27:56.882833] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.463 [2024-11-04 12:27:56.954971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:22.463 [2024-11-04 12:27:56.997971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.463 [2024-11-04 12:27:56.998015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.463 [2024-11-04 12:27:56.998023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.463 [2024-11-04 12:27:56.998031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.463 [2024-11-04 12:27:56.998037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:22.463 [2024-11-04 12:27:56.999646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.463 [2024-11-04 12:27:56.999773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.463 [2024-11-04 12:27:56.999930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:22.463 [2024-11-04 12:27:57.000101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:23.406 [2024-11-04 12:27:57.739241] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:23.406 Malloc0 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:23.406 [2024-11-04 12:27:57.807031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.406 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:23.406 [ 00:23:23.406 { 00:23:23.407 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:23.407 "subtype": "Discovery", 00:23:23.407 "listen_addresses": [], 00:23:23.407 "allow_any_host": true, 00:23:23.407 "hosts": [] 00:23:23.407 }, 00:23:23.407 { 00:23:23.407 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.407 "subtype": "NVMe", 00:23:23.407 "listen_addresses": [ 00:23:23.407 { 00:23:23.407 "trtype": "TCP", 00:23:23.407 "adrfam": "IPv4", 00:23:23.407 "traddr": "10.0.0.2", 00:23:23.407 "trsvcid": "4420" 00:23:23.407 } 00:23:23.407 ], 00:23:23.407 "allow_any_host": true, 00:23:23.407 "hosts": [], 00:23:23.407 "serial_number": "SPDK00000000000001", 00:23:23.407 "model_number": "SPDK bdev Controller", 00:23:23.407 "max_namespaces": 2, 00:23:23.407 "min_cntlid": 1, 00:23:23.407 "max_cntlid": 65519, 00:23:23.407 "namespaces": [ 00:23:23.407 { 00:23:23.407 "nsid": 1, 00:23:23.407 "bdev_name": "Malloc0", 00:23:23.407 "name": "Malloc0", 00:23:23.407 "nguid": "D69E0BC0F3FC47FCB2FFCD803F5E879D", 00:23:23.407 "uuid": "d69e0bc0-f3fc-47fc-b2ff-cd803f5e879d" 00:23:23.407 } 00:23:23.407 ] 00:23:23.407 } 00:23:23.407 ] 00:23:23.407 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.407 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:23.407 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:23.407 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1725571 00:23:23.407 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:23.407 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:23.407 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:23.407 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:23.407 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:23.407 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:23.407 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:23.407 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:23.407 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:23.407 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:23.407 12:27:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:23.668 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:23.668 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:23.668 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:23.668 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:23.668 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.668 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:23.668 Malloc1 00:23:23.668 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.668 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:23.668 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.668 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:23.668 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.668 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:23.668 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.668 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:23.668 Asynchronous Event Request test 00:23:23.668 Attaching to 10.0.0.2 00:23:23.668 Attached to 10.0.0.2 00:23:23.668 Registering asynchronous event callbacks... 00:23:23.668 Starting namespace attribute notice tests for all controllers... 00:23:23.668 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:23.669 aer_cb - Changed Namespace 00:23:23.669 Cleaning up... 
00:23:23.669 [ 00:23:23.669 { 00:23:23.669 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:23.669 "subtype": "Discovery", 00:23:23.669 "listen_addresses": [], 00:23:23.669 "allow_any_host": true, 00:23:23.669 "hosts": [] 00:23:23.669 }, 00:23:23.669 { 00:23:23.669 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.669 "subtype": "NVMe", 00:23:23.669 "listen_addresses": [ 00:23:23.669 { 00:23:23.669 "trtype": "TCP", 00:23:23.669 "adrfam": "IPv4", 00:23:23.669 "traddr": "10.0.0.2", 00:23:23.669 "trsvcid": "4420" 00:23:23.669 } 00:23:23.669 ], 00:23:23.669 "allow_any_host": true, 00:23:23.669 "hosts": [], 00:23:23.669 "serial_number": "SPDK00000000000001", 00:23:23.669 "model_number": "SPDK bdev Controller", 00:23:23.669 "max_namespaces": 2, 00:23:23.669 "min_cntlid": 1, 00:23:23.669 "max_cntlid": 65519, 00:23:23.669 "namespaces": [ 00:23:23.669 { 00:23:23.669 "nsid": 1, 00:23:23.669 "bdev_name": "Malloc0", 00:23:23.669 "name": "Malloc0", 00:23:23.669 "nguid": "D69E0BC0F3FC47FCB2FFCD803F5E879D", 00:23:23.669 "uuid": "d69e0bc0-f3fc-47fc-b2ff-cd803f5e879d" 00:23:23.669 }, 00:23:23.669 { 00:23:23.669 "nsid": 2, 00:23:23.669 "bdev_name": "Malloc1", 00:23:23.669 "name": "Malloc1", 00:23:23.669 "nguid": "1B6A62E90BA3403A9D4928136B363521", 00:23:23.669 "uuid": "1b6a62e9-0ba3-403a-9d49-28136b363521" 00:23:23.669 } 00:23:23.669 ] 00:23:23.669 } 00:23:23.669 ] 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1725571 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:23.669 rmmod 
nvme_tcp 00:23:23.669 rmmod nvme_fabrics 00:23:23.669 rmmod nvme_keyring 00:23:23.669 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 1725305 ']' 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 1725305 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1725305 ']' 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1725305 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1725305 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1725305' 00:23:23.930 killing process with pid 1725305 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1725305 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1725305 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.930 12:27:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:26.478 00:23:26.478 real 0m11.131s 00:23:26.478 user 0m7.828s 00:23:26.478 sys 0m5.965s 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:26.478 ************************************ 00:23:26.478 END TEST nvmf_aer 00:23:26.478 ************************************ 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.478 ************************************ 00:23:26.478 START TEST nvmf_async_init 00:23:26.478 ************************************ 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:26.478 * Looking for test storage... 00:23:26.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:26.478 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:26.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.479 --rc genhtml_branch_coverage=1 00:23:26.479 --rc genhtml_function_coverage=1 00:23:26.479 --rc genhtml_legend=1 00:23:26.479 --rc geninfo_all_blocks=1 00:23:26.479 --rc geninfo_unexecuted_blocks=1 00:23:26.479 00:23:26.479 ' 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:26.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.479 --rc genhtml_branch_coverage=1 00:23:26.479 --rc genhtml_function_coverage=1 00:23:26.479 --rc genhtml_legend=1 00:23:26.479 --rc geninfo_all_blocks=1 00:23:26.479 --rc geninfo_unexecuted_blocks=1 00:23:26.479 00:23:26.479 ' 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:26.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.479 --rc genhtml_branch_coverage=1 00:23:26.479 --rc genhtml_function_coverage=1 00:23:26.479 --rc genhtml_legend=1 00:23:26.479 --rc geninfo_all_blocks=1 00:23:26.479 --rc geninfo_unexecuted_blocks=1 00:23:26.479 00:23:26.479 ' 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:26.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.479 --rc genhtml_branch_coverage=1 00:23:26.479 --rc genhtml_function_coverage=1 00:23:26.479 --rc genhtml_legend=1 00:23:26.479 --rc geninfo_all_blocks=1 00:23:26.479 --rc geninfo_unexecuted_blocks=1 00:23:26.479 00:23:26.479 ' 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.479 12:28:00 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:26.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:26.479 12:28:00 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6c430f0d55be484682e280a8a2010fe3 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:26.479 12:28:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:33.071 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:33.071 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:33.071 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:33.071 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:33.071 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:33.072 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.072 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.072 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.072 12:28:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:33.072 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.072 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.072 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:33.072 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:33.072 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.072 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.072 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:33.072 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:33.072 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.072 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.072 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.072 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.072 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:33.072 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:33.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:23:33.333 00:23:33.333 --- 10.0.0.2 ping statistics --- 00:23:33.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.333 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:33.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:23:33.333 00:23:33.333 --- 10.0.0.1 ping statistics --- 00:23:33.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.333 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=1729663 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 1729663 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1729663 ']' 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:33.333 12:28:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:33.333 [2024-11-04 12:28:07.860345] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:23:33.333 [2024-11-04 12:28:07.860414] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.597 [2024-11-04 12:28:07.931886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.597 [2024-11-04 12:28:07.973586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.597 [2024-11-04 12:28:07.973626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.597 [2024-11-04 12:28:07.973635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.597 [2024-11-04 12:28:07.973642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.597 [2024-11-04 12:28:07.973647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.597 [2024-11-04 12:28:07.974246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.170 [2024-11-04 12:28:08.711908] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.170 null0 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:34.170 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.430 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.430 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6c430f0d55be484682e280a8a2010fe3 00:23:34.430 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.430 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.430 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.430 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:34.430 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.430 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.430 [2024-11-04 12:28:08.752138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.430 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.430 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:34.430 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.430 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.430 nvme0n1 00:23:34.430 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.430 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:34.430 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.430 12:28:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.430 [ 00:23:34.430 { 00:23:34.430 "name": "nvme0n1", 00:23:34.430 "aliases": [ 00:23:34.430 "6c430f0d-55be-4846-82e2-80a8a2010fe3" 00:23:34.430 ], 00:23:34.430 "product_name": "NVMe disk", 00:23:34.430 "block_size": 512, 00:23:34.430 "num_blocks": 2097152, 00:23:34.430 "uuid": "6c430f0d-55be-4846-82e2-80a8a2010fe3", 00:23:34.430 "numa_id": 0, 00:23:34.430 "assigned_rate_limits": { 00:23:34.430 "rw_ios_per_sec": 0, 00:23:34.430 "rw_mbytes_per_sec": 0, 00:23:34.430 "r_mbytes_per_sec": 0, 00:23:34.430 "w_mbytes_per_sec": 0 00:23:34.430 }, 00:23:34.430 "claimed": false, 00:23:34.430 "zoned": false, 00:23:34.430 "supported_io_types": { 00:23:34.430 "read": true, 00:23:34.430 "write": true, 00:23:34.430 "unmap": false, 00:23:34.430 "flush": true, 00:23:34.430 "reset": true, 00:23:34.430 "nvme_admin": true, 00:23:34.430 "nvme_io": true, 00:23:34.430 "nvme_io_md": false, 00:23:34.430 "write_zeroes": true, 00:23:34.430 "zcopy": false, 00:23:34.430 "get_zone_info": false, 00:23:34.430 "zone_management": false, 00:23:34.430 "zone_append": false, 00:23:34.430 "compare": true, 00:23:34.430 "compare_and_write": true, 00:23:34.430 "abort": true, 00:23:34.430 "seek_hole": false, 00:23:34.691 "seek_data": false, 00:23:34.691 "copy": true, 00:23:34.691 "nvme_iov_md": false 00:23:34.691 }, 00:23:34.691 
"memory_domains": [ 00:23:34.691 { 00:23:34.691 "dma_device_id": "system", 00:23:34.691 "dma_device_type": 1 00:23:34.691 } 00:23:34.691 ], 00:23:34.691 "driver_specific": { 00:23:34.692 "nvme": [ 00:23:34.692 { 00:23:34.692 "trid": { 00:23:34.692 "trtype": "TCP", 00:23:34.692 "adrfam": "IPv4", 00:23:34.692 "traddr": "10.0.0.2", 00:23:34.692 "trsvcid": "4420", 00:23:34.692 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:34.692 }, 00:23:34.692 "ctrlr_data": { 00:23:34.692 "cntlid": 1, 00:23:34.692 "vendor_id": "0x8086", 00:23:34.692 "model_number": "SPDK bdev Controller", 00:23:34.692 "serial_number": "00000000000000000000", 00:23:34.692 "firmware_revision": "25.01", 00:23:34.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:34.692 "oacs": { 00:23:34.692 "security": 0, 00:23:34.692 "format": 0, 00:23:34.692 "firmware": 0, 00:23:34.692 "ns_manage": 0 00:23:34.692 }, 00:23:34.692 "multi_ctrlr": true, 00:23:34.692 "ana_reporting": false 00:23:34.692 }, 00:23:34.692 "vs": { 00:23:34.692 "nvme_version": "1.3" 00:23:34.692 }, 00:23:34.692 "ns_data": { 00:23:34.692 "id": 1, 00:23:34.692 "can_share": true 00:23:34.692 } 00:23:34.692 } 00:23:34.692 ], 00:23:34.692 "mp_policy": "active_passive" 00:23:34.692 } 00:23:34.692 } 00:23:34.692 ] 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.692 [2024-11-04 12:28:09.009241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:34.692 [2024-11-04 12:28:09.009302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b0700 (9): Bad file descriptor 00:23:34.692 [2024-11-04 12:28:09.140846] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.692 [ 00:23:34.692 { 00:23:34.692 "name": "nvme0n1", 00:23:34.692 "aliases": [ 00:23:34.692 "6c430f0d-55be-4846-82e2-80a8a2010fe3" 00:23:34.692 ], 00:23:34.692 "product_name": "NVMe disk", 00:23:34.692 "block_size": 512, 00:23:34.692 "num_blocks": 2097152, 00:23:34.692 "uuid": "6c430f0d-55be-4846-82e2-80a8a2010fe3", 00:23:34.692 "numa_id": 0, 00:23:34.692 "assigned_rate_limits": { 00:23:34.692 "rw_ios_per_sec": 0, 00:23:34.692 "rw_mbytes_per_sec": 0, 00:23:34.692 "r_mbytes_per_sec": 0, 00:23:34.692 "w_mbytes_per_sec": 0 00:23:34.692 }, 00:23:34.692 "claimed": false, 00:23:34.692 "zoned": false, 00:23:34.692 "supported_io_types": { 00:23:34.692 "read": true, 00:23:34.692 "write": true, 00:23:34.692 "unmap": false, 00:23:34.692 "flush": true, 00:23:34.692 "reset": true, 00:23:34.692 "nvme_admin": true, 00:23:34.692 "nvme_io": true, 00:23:34.692 "nvme_io_md": false, 00:23:34.692 "write_zeroes": true, 00:23:34.692 "zcopy": false, 00:23:34.692 "get_zone_info": false, 00:23:34.692 "zone_management": false, 00:23:34.692 "zone_append": false, 00:23:34.692 "compare": true, 00:23:34.692 "compare_and_write": true, 00:23:34.692 "abort": true, 00:23:34.692 "seek_hole": false, 00:23:34.692 "seek_data": false, 00:23:34.692 "copy": true, 00:23:34.692 "nvme_iov_md": false 00:23:34.692 }, 00:23:34.692 "memory_domains": [ 00:23:34.692 { 00:23:34.692 "dma_device_id": "system", 00:23:34.692 "dma_device_type": 1 00:23:34.692 } 00:23:34.692 ], 00:23:34.692 "driver_specific": { 00:23:34.692 "nvme": [ 00:23:34.692 { 00:23:34.692 "trid": { 00:23:34.692 "trtype": "TCP", 00:23:34.692 "adrfam": "IPv4", 00:23:34.692 "traddr": "10.0.0.2", 00:23:34.692 "trsvcid": "4420", 00:23:34.692 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:34.692 }, 00:23:34.692 "ctrlr_data": { 00:23:34.692 "cntlid": 2, 00:23:34.692 "vendor_id": "0x8086", 00:23:34.692 "model_number": "SPDK bdev Controller", 00:23:34.692 "serial_number": "00000000000000000000", 00:23:34.692 "firmware_revision": "25.01", 00:23:34.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:34.692 "oacs": { 00:23:34.692 "security": 0, 00:23:34.692 "format": 0, 00:23:34.692 "firmware": 0, 00:23:34.692 "ns_manage": 0 00:23:34.692 }, 00:23:34.692 "multi_ctrlr": true, 00:23:34.692 "ana_reporting": false 00:23:34.692 }, 00:23:34.692 "vs": { 00:23:34.692 "nvme_version": "1.3" 00:23:34.692 }, 00:23:34.692 "ns_data": { 00:23:34.692 "id": 1, 00:23:34.692 "can_share": true 00:23:34.692 } 00:23:34.692 } 00:23:34.692 ], 00:23:34.692 "mp_policy": "active_passive" 00:23:34.692 } 00:23:34.692 } 00:23:34.692 ] 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
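The reset leg just traced condenses to the short sketch below. The three rpc_cmd calls are exactly as they appear in the trace; the jq filter at the end is illustrative only and not part of async_init.sh:

  # Attach over plain TCP, reset, then confirm the host renegotiated a new
  # association: the controller ID for nvme0n1 advances (cntlid 1 -> 2 above).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0
  rpc_cmd bdev_nvme_reset_controller nvme0
  rpc_cmd bdev_get_bdevs -b nvme0n1 \
      | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'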
00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.jMIqBk8C7v 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.jMIqBk8C7v 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.jMIqBk8C7v 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.692 [2024-11-04 12:28:09.209902] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:34.692 [2024-11-04 12:28:09.210013] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.692 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.692 [2024-11-04 12:28:09.225963] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.953 nvme0n1 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.953 [ 00:23:34.953 { 00:23:34.953 "name": "nvme0n1", 00:23:34.953 "aliases": [ 00:23:34.953 "6c430f0d-55be-4846-82e2-80a8a2010fe3" 00:23:34.953 ], 00:23:34.953 "product_name": "NVMe disk", 00:23:34.953 "block_size": 512, 00:23:34.953 "num_blocks": 2097152, 00:23:34.953 "uuid": "6c430f0d-55be-4846-82e2-80a8a2010fe3", 00:23:34.953 "numa_id": 0, 00:23:34.953 "assigned_rate_limits": { 00:23:34.953 "rw_ios_per_sec": 0, 00:23:34.953 "rw_mbytes_per_sec": 0, 00:23:34.953 "r_mbytes_per_sec": 0, 00:23:34.953 "w_mbytes_per_sec": 0 00:23:34.953 }, 00:23:34.953 "claimed": false, 00:23:34.953 "zoned": false, 00:23:34.953 "supported_io_types": { 00:23:34.953 "read": true, 00:23:34.953 "write": true, 00:23:34.953 "unmap": false, 00:23:34.953 "flush": true, 00:23:34.953 "reset": true, 00:23:34.953 "nvme_admin": true, 00:23:34.953 "nvme_io": true, 00:23:34.953 "nvme_io_md": false, 00:23:34.953 "write_zeroes": true, 00:23:34.953 "zcopy": false, 00:23:34.953 "get_zone_info": false, 00:23:34.953 "zone_management": false, 00:23:34.953 "zone_append": false, 00:23:34.953 "compare": true, 00:23:34.953 "compare_and_write": true, 00:23:34.953 "abort": true, 00:23:34.953 "seek_hole": false, 00:23:34.953 "seek_data": false, 00:23:34.953 "copy": true, 00:23:34.953 "nvme_iov_md": false 00:23:34.953 }, 00:23:34.953 "memory_domains": [ 00:23:34.953 { 00:23:34.953 "dma_device_id": "system", 00:23:34.953 "dma_device_type": 1 00:23:34.953 } 00:23:34.953 ], 00:23:34.953 "driver_specific": { 00:23:34.953 "nvme": [ 00:23:34.953 { 00:23:34.953 "trid": { 00:23:34.953 "trtype": "TCP", 00:23:34.953 "adrfam": "IPv4", 00:23:34.953 "traddr": "10.0.0.2", 00:23:34.953 "trsvcid": "4421", 00:23:34.953 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:34.953 }, 00:23:34.953 "ctrlr_data": { 00:23:34.953 "cntlid": 3, 00:23:34.953 "vendor_id": "0x8086", 00:23:34.953 "model_number": "SPDK bdev Controller", 00:23:34.953 "serial_number": "00000000000000000000", 00:23:34.953 "firmware_revision": "25.01", 00:23:34.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:34.953 "oacs": { 00:23:34.953 "security": 0, 00:23:34.953 "format": 0, 00:23:34.953 "firmware": 0, 00:23:34.953 "ns_manage": 0 00:23:34.953 }, 00:23:34.953 "multi_ctrlr": true, 00:23:34.953 "ana_reporting": false 00:23:34.953 }, 00:23:34.953 "vs": { 00:23:34.953 "nvme_version": "1.3" 00:23:34.953 }, 00:23:34.953 "ns_data": { 00:23:34.953 "id": 1, 00:23:34.953 "can_share": true 00:23:34.953 } 00:23:34.953 } 00:23:34.953 ], 00:23:34.953 "mp_policy": "active_passive" 00:23:34.953 } 00:23:34.953 } 00:23:34.953 ] 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.jMIqBk8C7v 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
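The TLS leg above reduces to the following sketch. The retained-format PSK and the key0 keyring name are taken verbatim from this trace, and every rpc_cmd call is copied from the commands the test logs; only the ordering comments are added:

  # Register a file-backed PSK, lock the subsystem down to one host, open a
  # --secure-channel listener on 4421, and reattach with the same key.
  key_path=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
  chmod 0600 "$key_path"
  rpc_cmd keyring_file_add_key key0 "$key_path"
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
  rm -f "$key_path"    # the test removes the key file once done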
00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:34.953 rmmod nvme_tcp 00:23:34.953 rmmod nvme_fabrics 00:23:34.953 rmmod nvme_keyring 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 1729663 ']' 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 1729663 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1729663 ']' 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1729663 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1729663 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1729663' 00:23:34.953 killing process with pid 1729663 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1729663 00:23:34.953 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1729663 00:23:35.214 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:35.214 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:35.214 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:35.214 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:35.214 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:23:35.214 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:35.214 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:23:35.214 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:35.214 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:35.214 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:35.214 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:35.214 12:28:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.128 12:28:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:37.128 00:23:37.128 real 0m11.088s 00:23:37.128 user 0m3.839s 00:23:37.128 sys 0m5.773s 00:23:37.128 12:28:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:37.128 12:28:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:37.128 ************************************ 00:23:37.128 END TEST nvmf_async_init 00:23:37.128 ************************************ 00:23:37.389 12:28:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:37.389 12:28:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:37.389 12:28:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:37.389 12:28:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.389 ************************************ 00:23:37.389 START TEST dma 00:23:37.389 ************************************ 00:23:37.389 12:28:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:37.389 * Looking for test storage... 00:23:37.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:37.389 12:28:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:37.389 12:28:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:23:37.389 12:28:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:37.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.651 --rc genhtml_branch_coverage=1 00:23:37.651 --rc genhtml_function_coverage=1 00:23:37.651 --rc genhtml_legend=1 00:23:37.651 --rc geninfo_all_blocks=1 00:23:37.651 --rc geninfo_unexecuted_blocks=1 00:23:37.651 00:23:37.651 ' 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:37.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.651 --rc genhtml_branch_coverage=1 00:23:37.651 --rc genhtml_function_coverage=1 00:23:37.651 --rc genhtml_legend=1 00:23:37.651 --rc geninfo_all_blocks=1 00:23:37.651 --rc geninfo_unexecuted_blocks=1 00:23:37.651 00:23:37.651 ' 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:37.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.651 --rc genhtml_branch_coverage=1 00:23:37.651 --rc genhtml_function_coverage=1 00:23:37.651 --rc genhtml_legend=1 00:23:37.651 --rc geninfo_all_blocks=1 00:23:37.651 --rc geninfo_unexecuted_blocks=1 00:23:37.651 00:23:37.651 ' 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:37.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.651 --rc genhtml_branch_coverage=1 00:23:37.651 --rc genhtml_function_coverage=1 00:23:37.651 --rc genhtml_legend=1 00:23:37.651 --rc geninfo_all_blocks=1 00:23:37.651 --rc geninfo_unexecuted_blocks=1 00:23:37.651 00:23:37.651 ' 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.651 
12:28:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.651 12:28:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:37.651 12:28:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:37.651 12:28:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:37.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:37.652 00:23:37.652 real 0m0.246s 00:23:37.652 user 0m0.136s 00:23:37.652 sys 0m0.124s 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:37.652 ************************************ 00:23:37.652 END TEST dma 00:23:37.652 ************************************ 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.652 ************************************ 00:23:37.652 START TEST nvmf_identify 00:23:37.652 
************************************ 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:37.652 * Looking for test storage... 00:23:37.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:23:37.652 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:37.913 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:37.913 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:37.913 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:37.913 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:37.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.914 --rc genhtml_branch_coverage=1 00:23:37.914 --rc genhtml_function_coverage=1 00:23:37.914 --rc genhtml_legend=1 00:23:37.914 --rc geninfo_all_blocks=1 00:23:37.914 --rc geninfo_unexecuted_blocks=1 00:23:37.914 00:23:37.914 ' 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:37.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.914 --rc genhtml_branch_coverage=1 00:23:37.914 --rc genhtml_function_coverage=1 00:23:37.914 --rc genhtml_legend=1 00:23:37.914 --rc geninfo_all_blocks=1 00:23:37.914 --rc geninfo_unexecuted_blocks=1 00:23:37.914 00:23:37.914 ' 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:37.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.914 --rc genhtml_branch_coverage=1 00:23:37.914 --rc genhtml_function_coverage=1 00:23:37.914 --rc genhtml_legend=1 00:23:37.914 --rc geninfo_all_blocks=1 00:23:37.914 --rc geninfo_unexecuted_blocks=1 00:23:37.914 00:23:37.914 ' 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:37.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.914 --rc genhtml_branch_coverage=1 00:23:37.914 --rc genhtml_function_coverage=1 00:23:37.914 --rc genhtml_legend=1 00:23:37.914 --rc geninfo_all_blocks=1 00:23:37.914 --rc geninfo_unexecuted_blocks=1 00:23:37.914 00:23:37.914 ' 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:37.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:37.914 12:28:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:46.060 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:46.060 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
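The scan in progress here is common.sh resolving each matched e810 function (0x8086:0x159b, driver ice) to its kernel net device through sysfs; the two "Found net devices under ..." records follow below. The core of that lookup, reduced to a standalone sketch for one port (PCI address taken from this run):

    pci=0000:4b:00.0
    for netdev in /sys/bus/pci/devices/$pci/net/*; do
        name=${netdev##*/}        # e.g. cvl_0_0 after the CI's interface rename
        echo "Found net device under $pci: $name ($(cat "$netdev/operstate"))"
    done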
00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:46.060 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:46.060 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:46.060 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:46.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:23:46.061 00:23:46.061 --- 10.0.0.2 ping statistics --- 00:23:46.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.061 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:46.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:23:46.061 00:23:46.061 --- 10.0.0.1 ping statistics --- 00:23:46.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.061 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1734390 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1734390 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1734390 ']' 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:46.061 12:28:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.061 [2024-11-04 12:28:19.748277] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
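With both pings answered, nvmf_tcp_init is done: the first e810 port now lives in a private namespace as the target side (10.0.0.2) while its peer stays in the root namespace as the initiator (10.0.0.1), and identify.sh@18 has just launched nvmf_tgt inside that namespace. The wiring, condensed from the common.sh records above into a standalone sketch (interface and namespace names as in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &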
00:23:46.061 [2024-11-04 12:28:19.748347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.061 [2024-11-04 12:28:19.820495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:46.061 [2024-11-04 12:28:19.864555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.061 [2024-11-04 12:28:19.864596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.061 [2024-11-04 12:28:19.864604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.061 [2024-11-04 12:28:19.864614] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.061 [2024-11-04 12:28:19.864620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.061 [2024-11-04 12:28:19.866259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.061 [2024-11-04 12:28:19.866378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.061 [2024-11-04 12:28:19.866537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.061 [2024-11-04 12:28:19.866537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:46.061 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.061 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:46.061 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:46.061 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.061 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.061 [2024-11-04 12:28:20.565704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.061 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.061 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:46.061 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:46.061 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.061 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:46.061 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.061 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.325 Malloc0 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.325 [2024-11-04 12:28:20.679937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.325 [ 00:23:46.325 { 00:23:46.325 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:46.325 "subtype": "Discovery", 00:23:46.325 "listen_addresses": [ 00:23:46.325 { 00:23:46.325 "trtype": "TCP", 00:23:46.325 "adrfam": "IPv4", 00:23:46.325 "traddr": "10.0.0.2", 00:23:46.325 "trsvcid": "4420" 00:23:46.325 } 00:23:46.325 ], 00:23:46.325 "allow_any_host": true, 00:23:46.325 "hosts": [] 00:23:46.325 }, 00:23:46.325 { 00:23:46.325 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.325 "subtype": "NVMe", 00:23:46.325 "listen_addresses": [ 00:23:46.325 { 00:23:46.325 "trtype": "TCP", 00:23:46.325 "adrfam": "IPv4", 00:23:46.325 "traddr": "10.0.0.2", 00:23:46.325 "trsvcid": "4420" 00:23:46.325 } 00:23:46.325 ], 00:23:46.325 "allow_any_host": true, 00:23:46.325 "hosts": [], 00:23:46.325 "serial_number": "SPDK00000000000001", 00:23:46.325 "model_number": "SPDK bdev Controller", 00:23:46.325 "max_namespaces": 32, 00:23:46.325 "min_cntlid": 1, 00:23:46.325 "max_cntlid": 65519, 00:23:46.325 "namespaces": [ 00:23:46.325 { 00:23:46.325 "nsid": 1, 00:23:46.325 "bdev_name": "Malloc0", 00:23:46.325 "name": "Malloc0", 00:23:46.325 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:46.325 "eui64": "ABCDEF0123456789", 00:23:46.325 "uuid": "a2af9a56-5af4-4f27-a588-9e88262bcbc0" 00:23:46.325 } 00:23:46.325 ] 00:23:46.325 } 00:23:46.325 ] 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.325 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:46.325 [2024-11-04 12:28:20.743555] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:23:46.325 [2024-11-04 12:28:20.743595] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1734490 ] 00:23:46.325 [2024-11-04 12:28:20.777350] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:46.325 [2024-11-04 12:28:20.777393] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:46.325 [2024-11-04 12:28:20.777398] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:46.325 [2024-11-04 12:28:20.777409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:46.325 [2024-11-04 12:28:20.777417] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:46.325 [2024-11-04 12:28:20.778126] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:46.325 [2024-11-04 12:28:20.778159] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x965760 0 00:23:46.325 [2024-11-04 12:28:20.788758] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:46.325 [2024-11-04 12:28:20.788770] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:46.325 [2024-11-04 12:28:20.788777] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:46.325 [2024-11-04 12:28:20.788781] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:46.325 [2024-11-04 12:28:20.788807] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.325 [2024-11-04 12:28:20.788812] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.325 [2024-11-04 12:28:20.788817] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x965760) 00:23:46.325 [2024-11-04 12:28:20.788829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:46.325 [2024-11-04 12:28:20.788846] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5480, cid 0, qid 0 00:23:46.326 [2024-11-04 12:28:20.795757] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.326 [2024-11-04 12:28:20.795767] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.326 [2024-11-04 12:28:20.795770] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.326 [2024-11-04 12:28:20.795775] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5480) on tqpair=0x965760 00:23:46.326 [2024-11-04 12:28:20.795788] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:46.326 [2024-11-04 12:28:20.795795] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:46.326 [2024-11-04 12:28:20.795803] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:46.326 [2024-11-04 12:28:20.795816] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.326 [2024-11-04 12:28:20.795821] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.326 [2024-11-04 12:28:20.795825] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x965760) 00:23:46.326 [2024-11-04 12:28:20.795832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.326 [2024-11-04 12:28:20.795846] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5480, cid 0, qid 0 00:23:46.326 [2024-11-04 12:28:20.796014] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.326 [2024-11-04 12:28:20.796021] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.326 [2024-11-04 12:28:20.796025] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.326 [2024-11-04 12:28:20.796029] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5480) on tqpair=0x965760 00:23:46.326 [2024-11-04 12:28:20.796034] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:46.326 [2024-11-04 12:28:20.796041] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:46.326 [2024-11-04 12:28:20.796048] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.326 [2024-11-04 12:28:20.796052] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.326 [2024-11-04 12:28:20.796056] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x965760) 00:23:46.326 [2024-11-04 12:28:20.796063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.326 [2024-11-04 12:28:20.796073] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5480, cid 0, qid 0 00:23:46.326 [2024-11-04 12:28:20.796228] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.326 [2024-11-04 12:28:20.796235] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.326 [2024-11-04 12:28:20.796238] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.326 [2024-11-04 12:28:20.796242] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5480) on tqpair=0x965760 00:23:46.326 [2024-11-04 12:28:20.796247] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:46.326 [2024-11-04 12:28:20.796255] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:46.326 [2024-11-04 12:28:20.796262] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.326 [2024-11-04 12:28:20.796266] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.326 [2024-11-04 12:28:20.796269] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x965760) 00:23:46.326 [2024-11-04 12:28:20.796276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.326 [2024-11-04 12:28:20.796286] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5480, cid 0, qid 0 00:23:46.326 
[2024-11-04 12:28:20.796445] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.326 [2024-11-04 12:28:20.796452] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.326 [2024-11-04 12:28:20.796455] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.326 [2024-11-04 12:28:20.796459] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5480) on tqpair=0x965760
00:23:46.326 [2024-11-04 12:28:20.796464] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:23:46.326 [2024-11-04 12:28:20.796473] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.326 [2024-11-04 12:28:20.796480] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.326 [2024-11-04 12:28:20.796484] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x965760)
00:23:46.326 [2024-11-04 12:28:20.796490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.326 [2024-11-04 12:28:20.796501] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5480, cid 0, qid 0
00:23:46.326 [2024-11-04 12:28:20.796712] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.326 [2024-11-04 12:28:20.796719] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.326 [2024-11-04 12:28:20.796722] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.326 [2024-11-04 12:28:20.796726] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5480) on tqpair=0x965760
00:23:46.326 [2024-11-04 12:28:20.796731] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0
00:23:46.326 [2024-11-04 12:28:20.796736] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms)
00:23:46.326 [2024-11-04 12:28:20.796743] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:23:46.326 [2024-11-04 12:28:20.796858] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1
00:23:46.326 [2024-11-04 12:28:20.796863] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:23:46.326 [2024-11-04 12:28:20.796871] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.326 [2024-11-04 12:28:20.796875] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.326 [2024-11-04 12:28:20.796879] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x965760)
00:23:46.326 [2024-11-04 12:28:20.796885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.326 [2024-11-04 12:28:20.796896] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5480, cid 0, qid 0
00:23:46.326 [2024-11-04 12:28:20.797070] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.326 [2024-11-04 12:28:20.797077] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.326 [2024-11-04 12:28:20.797080] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.326 [2024-11-04 12:28:20.797084] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5480) on tqpair=0x965760
00:23:46.326 [2024-11-04 12:28:20.797089] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:23:46.326 [2024-11-04 12:28:20.797099] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.326 [2024-11-04 12:28:20.797103] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.326 [2024-11-04 12:28:20.797106] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x965760)
00:23:46.326 [2024-11-04 12:28:20.797113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.326 [2024-11-04 12:28:20.797123] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5480, cid 0, qid 0
00:23:46.326 [2024-11-04 12:28:20.797292] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.326 [2024-11-04 12:28:20.797299] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.326 [2024-11-04 12:28:20.797303] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.326 [2024-11-04 12:28:20.797306] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5480) on tqpair=0x965760
00:23:46.326 [2024-11-04 12:28:20.797311] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:23:46.326 [2024-11-04 12:28:20.797318] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms)
00:23:46.326 [2024-11-04 12:28:20.797326] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout)
00:23:46.326 [2024-11-04 12:28:20.797334] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms)
00:23:46.326 [2024-11-04 12:28:20.797343] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.326 [2024-11-04 12:28:20.797347] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x965760)
00:23:46.326 [2024-11-04 12:28:20.797354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.326 [2024-11-04 12:28:20.797364] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5480, cid 0, qid 0
00:23:46.326 [2024-11-04 12:28:20.797564] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:46.326 [2024-11-04 12:28:20.797571] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:46.326 [2024-11-04 12:28:20.797575] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:46.326 [2024-11-04 12:28:20.797579] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x965760): datao=0, datal=4096, cccid=0
00:23:46.326 [2024-11-04 12:28:20.797584] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c5480) on tqpair(0x965760): expected_datao=0, payload_size=4096
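
The records above trace SPDK's init state machine driving the standard NVMe enable handshake over Fabrics property capsules: FABRIC PROPERTY GET polls CSTS until RDY = 0 while the controller is disabled, FABRIC PROPERTY SET writes CC.EN = 1, and further PROPERTY GETs poll CSTS until RDY = 1 ("controller is ready"). A minimal runnable sketch of that handshake follows; prop_get()/prop_set() are hypothetical stand-ins for the property capsules, backed by a fake in-memory register file rather than SPDK's transport, while the CC/CSTS offsets and bit positions are the NVMe-spec values.

    /*
     * Sketch of the CC.EN / CSTS.RDY enable handshake seen in the trace.
     * prop_get()/prop_set() stand in for the FABRIC PROPERTY GET/SET
     * capsules; a real host issues them on the admin queue and bounds
     * each wait with the 15000 ms timeouts logged above.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define NVME_REG_CC   0x14u /* Controller Configuration */
    #define NVME_REG_CSTS 0x1Cu /* Controller Status */
    #define CC_EN         (1u << 0)
    #define CSTS_RDY      (1u << 0)

    static uint32_t regs[0x20]; /* fake register file for the sketch */

    static uint32_t prop_get(uint32_t off) { return regs[off]; }

    static void prop_set(uint32_t off, uint32_t val)
    {
        regs[off] = val;
        /* Fake controller: RDY tracks EN immediately in this simulation. */
        if (off == NVME_REG_CC)
            regs[NVME_REG_CSTS] = (val & CC_EN) ? CSTS_RDY : 0;
    }

    int main(void)
    {
        /* "setting state to disable and wait for CSTS.RDY = 0" */
        prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) & ~CC_EN);
        while (prop_get(NVME_REG_CSTS) & CSTS_RDY)
            ; /* each iteration is one FABRIC PROPERTY GET of CSTS */

        /* "Setting CC.EN = 1", then "wait for CSTS.RDY = 1" */
        prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | CC_EN);
        while (!(prop_get(NVME_REG_CSTS) & CSTS_RDY))
            ;

        puts("CC.EN = 1 && CSTS.RDY = 1 - controller is ready");
        return 0;
    }
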
00:23:46.326 [2024-11-04 12:28:20.797588] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.326 [2024-11-04 12:28:20.797596] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:46.326 [2024-11-04 12:28:20.797600] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:46.326 [2024-11-04 12:28:20.797776] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.326 [2024-11-04 12:28:20.797783] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.326 [2024-11-04 12:28:20.797786] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.326 [2024-11-04 12:28:20.797790] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5480) on tqpair=0x965760
00:23:46.326 [2024-11-04 12:28:20.797798] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295
00:23:46.326 [2024-11-04 12:28:20.797803] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072
00:23:46.326 [2024-11-04 12:28:20.797807] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001
00:23:46.326 [2024-11-04 12:28:20.797812] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16
00:23:46.326 [2024-11-04 12:28:20.797817] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1
00:23:46.326 [2024-11-04 12:28:20.797822] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms)
00:23:46.326 [2024-11-04 12:28:20.797831] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms)
00:23:46.327 [2024-11-04 12:28:20.797837] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.797841] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.797845] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x965760)
00:23:46.327 [2024-11-04 12:28:20.797852] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:23:46.327 [2024-11-04 12:28:20.797864] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5480, cid 0, qid 0
00:23:46.327 [2024-11-04 12:28:20.798040] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.327 [2024-11-04 12:28:20.798048] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.327 [2024-11-04 12:28:20.798052] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798056] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5480) on tqpair=0x965760
00:23:46.327 [2024-11-04 12:28:20.798066] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798070] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798074] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x965760)
00:23:46.327 [2024-11-04 12:28:20.798080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.327 [2024-11-04 12:28:20.798086] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798090] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798093] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x965760)
00:23:46.327 [2024-11-04 12:28:20.798099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.327 [2024-11-04 12:28:20.798105] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798109] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798113] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x965760)
00:23:46.327 [2024-11-04 12:28:20.798119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.327 [2024-11-04 12:28:20.798125] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798128] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798132] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760)
00:23:46.327 [2024-11-04 12:28:20.798138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.327 [2024-11-04 12:28:20.798143] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms)
00:23:46.327 [2024-11-04 12:28:20.798151] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:23:46.327 [2024-11-04 12:28:20.798157] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798161] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x965760)
00:23:46.327 [2024-11-04 12:28:20.798168] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.327 [2024-11-04 12:28:20.798180] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5480, cid 0, qid 0
00:23:46.327 [2024-11-04 12:28:20.798185] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5600, cid 1, qid 0
00:23:46.327 [2024-11-04 12:28:20.798190] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5780, cid 2, qid 0
00:23:46.327 [2024-11-04 12:28:20.798195] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0
00:23:46.327 [2024-11-04 12:28:20.798199] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5a80, cid 4, qid 0
00:23:46.327 [2024-11-04 12:28:20.798436] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.327 [2024-11-04 12:28:20.798444] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.327 [2024-11-04 12:28:20.798447] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798451] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5a80) on tqpair=0x965760
00:23:46.327 [2024-11-04 12:28:20.798458] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us
00:23:46.327 [2024-11-04 12:28:20.798465] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout)
00:23:46.327 [2024-11-04 12:28:20.798476] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798480] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x965760)
00:23:46.327 [2024-11-04 12:28:20.798486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.327 [2024-11-04 12:28:20.798497] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5a80, cid 4, qid 0
00:23:46.327 [2024-11-04 12:28:20.798695] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:46.327 [2024-11-04 12:28:20.798701] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:46.327 [2024-11-04 12:28:20.798705] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798709] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x965760): datao=0, datal=4096, cccid=4
00:23:46.327 [2024-11-04 12:28:20.798713] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c5a80) on tqpair(0x965760): expected_datao=0, payload_size=4096
00:23:46.327 [2024-11-04 12:28:20.798718] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798734] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798738] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798883] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.327 [2024-11-04 12:28:20.798890] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.327 [2024-11-04 12:28:20.798893] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798897] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5a80) on tqpair=0x965760
00:23:46.327 [2024-11-04 12:28:20.798909] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state
00:23:46.327 [2024-11-04 12:28:20.798932] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798937] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x965760)
00:23:46.327 [2024-11-04 12:28:20.798943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.327 [2024-11-04 12:28:20.798950] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798954] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.798958] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x965760)
00:23:46.327 [2024-11-04 12:28:20.798964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.327 [2024-11-04 12:28:20.798976] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5a80, cid 4, qid 0
00:23:46.327 [2024-11-04 12:28:20.798981] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5c00, cid 5, qid 0
00:23:46.327 [2024-11-04 12:28:20.799183] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:46.327 [2024-11-04 12:28:20.799190] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:46.327 [2024-11-04 12:28:20.799193] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.799197] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x965760): datao=0, datal=1024, cccid=4
00:23:46.327 [2024-11-04 12:28:20.799202] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c5a80) on tqpair(0x965760): expected_datao=0, payload_size=1024
00:23:46.327 [2024-11-04 12:28:20.799206] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.799213] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.799216] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.799224] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.327 [2024-11-04 12:28:20.799230] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.327 [2024-11-04 12:28:20.799234] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.799238] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5c00) on tqpair=0x965760
00:23:46.327 [2024-11-04 12:28:20.839927] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.327 [2024-11-04 12:28:20.839938] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.327 [2024-11-04 12:28:20.839941] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.839945] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5a80) on tqpair=0x965760
00:23:46.327 [2024-11-04 12:28:20.839960] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.839964] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x965760)
00:23:46.327 [2024-11-04 12:28:20.839971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.327 [2024-11-04 12:28:20.839986] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5a80, cid 4, qid 0
00:23:46.327 [2024-11-04 12:28:20.840189] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:46.327 [2024-11-04 12:28:20.840196] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:46.327 [2024-11-04 12:28:20.840200] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.840204] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x965760): datao=0, datal=3072, cccid=4
00:23:46.327 [2024-11-04 12:28:20.840208] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c5a80) on tqpair(0x965760): expected_datao=0, payload_size=3072
00:23:46.327 [2024-11-04 12:28:20.840213] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.840219] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
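
The records above show the tail of init: the host arms four ASYNC EVENT REQUESTs (cid 0-3, matching the Async Event Request Limit of 4 reported below), reads the Keep Alive Timer feature back (GET FEATURES cid:4 cdw10:0000000f), and then begins issuing KEEP ALIVE (opcode 18h) commands; the "Sending keep alive every 5000000 us" line is consistent with firing at half of a 10000 ms KATO so a command always lands well inside the timeout window. A small sketch of that derivation, under the assumption that the interval is KATO/2 (SPDK's exact policy lives in nvme_ctrlr.c and is not quoted here):

    /*
     * Sketch: derive a keep-alive send interval from the KATO value read
     * back via Get Features (FID 0x0f, the cdw10:0000000f in the trace).
     * Firing twice per timeout window is an assumed policy; it reproduces
     * the "every 5000000 us" figure for a 10000 ms KATO.
     */
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t keep_alive_interval_us(uint32_t kato_ms)
    {
        return (uint64_t)kato_ms * 1000 / 2; /* assumed: KATO / 2 */
    }

    int main(void)
    {
        uint32_t kato_ms = 10000; /* value assumed returned by Get Features */
        printf("Sending keep alive every %llu us\n",
               (unsigned long long)keep_alive_interval_us(kato_ms));
        return 0;
    }
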
00:23:46.327 [2024-11-04 12:28:20.840223] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.840363] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.327 [2024-11-04 12:28:20.840369] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.327 [2024-11-04 12:28:20.840373] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.840377] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5a80) on tqpair=0x965760
00:23:46.327 [2024-11-04 12:28:20.840385] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.327 [2024-11-04 12:28:20.840389] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x965760)
00:23:46.327 [2024-11-04 12:28:20.840395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.327 [2024-11-04 12:28:20.840409] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5a80, cid 4, qid 0
00:23:46.327 [2024-11-04 12:28:20.843756] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:46.328 [2024-11-04 12:28:20.843764] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:46.328 [2024-11-04 12:28:20.843768] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:46.328 [2024-11-04 12:28:20.843771] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x965760): datao=0, datal=8, cccid=4
00:23:46.328 [2024-11-04 12:28:20.843776] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c5a80) on tqpair(0x965760): expected_datao=0, payload_size=8
00:23:46.328 [2024-11-04 12:28:20.843781] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.328 [2024-11-04 12:28:20.843787] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:46.328 [2024-11-04 12:28:20.843791] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:46.328 [2024-11-04 12:28:20.881755] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.328 [2024-11-04 12:28:20.881768] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.328 [2024-11-04 12:28:20.881772] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.328 [2024-11-04 12:28:20.881776] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5a80) on tqpair=0x965760
00:23:46.328 =====================================================
00:23:46.328 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:46.328 =====================================================
00:23:46.328 Controller Capabilities/Features
00:23:46.328 ================================
00:23:46.328 Vendor ID: 0000
00:23:46.328 Subsystem Vendor ID: 0000
00:23:46.328 Serial Number: ....................
00:23:46.328 Model Number: ........................................
00:23:46.328 Firmware Version: 25.01
00:23:46.328 Recommended Arb Burst: 0
00:23:46.328 IEEE OUI Identifier: 00 00 00
00:23:46.328 Multi-path I/O
00:23:46.328 May have multiple subsystem ports: No
00:23:46.328 May have multiple controllers: No
00:23:46.328 Associated with SR-IOV VF: No
00:23:46.328 Max Data Transfer Size: 131072
00:23:46.328 Max Number of Namespaces: 0
00:23:46.328 Max Number of I/O Queues: 1024
00:23:46.328 NVMe Specification Version (VS): 1.3
00:23:46.328 NVMe Specification Version (Identify): 1.3
00:23:46.328 Maximum Queue Entries: 128
00:23:46.328 Contiguous Queues Required: Yes
00:23:46.328 Arbitration Mechanisms Supported
00:23:46.328 Weighted Round Robin: Not Supported
00:23:46.328 Vendor Specific: Not Supported
00:23:46.328 Reset Timeout: 15000 ms
00:23:46.328 Doorbell Stride: 4 bytes
00:23:46.328 NVM Subsystem Reset: Not Supported
00:23:46.328 Command Sets Supported
00:23:46.328 NVM Command Set: Supported
00:23:46.328 Boot Partition: Not Supported
00:23:46.328 Memory Page Size Minimum: 4096 bytes
00:23:46.328 Memory Page Size Maximum: 4096 bytes
00:23:46.328 Persistent Memory Region: Not Supported
00:23:46.328 Optional Asynchronous Events Supported
00:23:46.328 Namespace Attribute Notices: Not Supported
00:23:46.328 Firmware Activation Notices: Not Supported
00:23:46.328 ANA Change Notices: Not Supported
00:23:46.328 PLE Aggregate Log Change Notices: Not Supported
00:23:46.328 LBA Status Info Alert Notices: Not Supported
00:23:46.328 EGE Aggregate Log Change Notices: Not Supported
00:23:46.328 Normal NVM Subsystem Shutdown event: Not Supported
00:23:46.328 Zone Descriptor Change Notices: Not Supported
00:23:46.328 Discovery Log Change Notices: Supported
00:23:46.328 Controller Attributes
00:23:46.328 128-bit Host Identifier: Not Supported
00:23:46.328 Non-Operational Permissive Mode: Not Supported
00:23:46.328 NVM Sets: Not Supported
00:23:46.328 Read Recovery Levels: Not Supported
00:23:46.328 Endurance Groups: Not Supported
00:23:46.328 Predictable Latency Mode: Not Supported
00:23:46.328 Traffic Based Keep ALive: Not Supported
00:23:46.328 Namespace Granularity: Not Supported
00:23:46.328 SQ Associations: Not Supported
00:23:46.328 UUID List: Not Supported
00:23:46.328 Multi-Domain Subsystem: Not Supported
00:23:46.328 Fixed Capacity Management: Not Supported
00:23:46.328 Variable Capacity Management: Not Supported
00:23:46.328 Delete Endurance Group: Not Supported
00:23:46.328 Delete NVM Set: Not Supported
00:23:46.328 Extended LBA Formats Supported: Not Supported
00:23:46.328 Flexible Data Placement Supported: Not Supported
00:23:46.328
00:23:46.328 Controller Memory Buffer Support
00:23:46.328 ================================
00:23:46.328 Supported: No
00:23:46.328
00:23:46.328 Persistent Memory Region Support
00:23:46.328 ================================
00:23:46.328 Supported: No
00:23:46.328
00:23:46.328 Admin Command Set Attributes
00:23:46.328 ============================
00:23:46.328 Security Send/Receive: Not Supported
00:23:46.328 Format NVM: Not Supported
00:23:46.328 Firmware Activate/Download: Not Supported
00:23:46.328 Namespace Management: Not Supported
00:23:46.328 Device Self-Test: Not Supported
00:23:46.328 Directives: Not Supported
00:23:46.328 NVMe-MI: Not Supported
00:23:46.328 Virtualization Management: Not Supported
00:23:46.328 Doorbell Buffer Config: Not Supported
00:23:46.328 Get LBA Status Capability: Not Supported
00:23:46.328 Command & Feature Lockdown Capability: Not Supported
00:23:46.328 Abort Command Limit: 1
00:23:46.328 Async Event Request Limit: 4
00:23:46.328 Number of Firmware Slots: N/A
00:23:46.328 Firmware Slot 1 Read-Only: N/A
00:23:46.328 Firmware Activation Without Reset: N/A
00:23:46.328 Multiple Update Detection Support: N/A
00:23:46.328 Firmware Update Granularity: No Information Provided
00:23:46.328 Per-Namespace SMART Log: No
00:23:46.328 Asymmetric Namespace Access Log Page: Not Supported
00:23:46.328 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:46.328 Command Effects Log Page: Not Supported
00:23:46.328 Get Log Page Extended Data: Supported
00:23:46.328 Telemetry Log Pages: Not Supported
00:23:46.328 Persistent Event Log Pages: Not Supported
00:23:46.328 Supported Log Pages Log Page: May Support
00:23:46.328 Commands Supported & Effects Log Page: Not Supported
00:23:46.328 Feature Identifiers & Effects Log Page:May Support
00:23:46.328 NVMe-MI Commands & Effects Log Page: May Support
00:23:46.328 Data Area 4 for Telemetry Log: Not Supported
00:23:46.328 Error Log Page Entries Supported: 128
00:23:46.328 Keep Alive: Not Supported
00:23:46.328
00:23:46.328 NVM Command Set Attributes
00:23:46.328 ==========================
00:23:46.328 Submission Queue Entry Size
00:23:46.328 Max: 1
00:23:46.328 Min: 1
00:23:46.328 Completion Queue Entry Size
00:23:46.328 Max: 1
00:23:46.328 Min: 1
00:23:46.328 Number of Namespaces: 0
00:23:46.328 Compare Command: Not Supported
00:23:46.328 Write Uncorrectable Command: Not Supported
00:23:46.328 Dataset Management Command: Not Supported
00:23:46.328 Write Zeroes Command: Not Supported
00:23:46.328 Set Features Save Field: Not Supported
00:23:46.328 Reservations: Not Supported
00:23:46.328 Timestamp: Not Supported
00:23:46.328 Copy: Not Supported
00:23:46.328 Volatile Write Cache: Not Present
00:23:46.328 Atomic Write Unit (Normal): 1
00:23:46.328 Atomic Write Unit (PFail): 1
00:23:46.328 Atomic Compare & Write Unit: 1
00:23:46.328 Fused Compare & Write: Supported
00:23:46.328 Scatter-Gather List
00:23:46.328 SGL Command Set: Supported
00:23:46.328 SGL Keyed: Supported
00:23:46.328 SGL Bit Bucket Descriptor: Not Supported
00:23:46.328 SGL Metadata Pointer: Not Supported
00:23:46.328 Oversized SGL: Not Supported
00:23:46.328 SGL Metadata Address: Not Supported
00:23:46.328 SGL Offset: Supported
00:23:46.328 Transport SGL Data Block: Not Supported
00:23:46.328 Replay Protected Memory Block: Not Supported
00:23:46.328
00:23:46.328 Firmware Slot Information
00:23:46.328 =========================
00:23:46.328 Active slot: 0
00:23:46.328
00:23:46.328
00:23:46.328 Error Log
00:23:46.328 =========
00:23:46.328
00:23:46.328 Active Namespaces
00:23:46.328 =================
00:23:46.328 Discovery Log Page
00:23:46.328 ==================
00:23:46.328 Generation Counter: 2
00:23:46.328 Number of Records: 2
00:23:46.328 Record Format: 0
00:23:46.328
00:23:46.328 Discovery Log Entry 0
00:23:46.328 ----------------------
00:23:46.328 Transport Type: 3 (TCP)
00:23:46.328 Address Family: 1 (IPv4)
00:23:46.328 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:46.328 Entry Flags:
00:23:46.328 Duplicate Returned Information: 1
00:23:46.328 Explicit Persistent Connection Support for Discovery: 1
00:23:46.328 Transport Requirements:
00:23:46.328 Secure Channel: Not Required
00:23:46.328 Port ID: 0 (0x0000)
00:23:46.328 Controller ID: 65535 (0xffff)
00:23:46.328 Admin Max SQ Size: 128
00:23:46.328 Transport Service Identifier: 4420
00:23:46.328 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:46.328 Transport Address: 10.0.0.2
00:23:46.328 Discovery Log Entry 1
00:23:46.328 ----------------------
00:23:46.328 Transport Type: 3 (TCP)
00:23:46.328 Address Family: 1 (IPv4)
00:23:46.328 Subsystem Type: 2 (NVM Subsystem)
00:23:46.328 Entry Flags:
00:23:46.329 Duplicate Returned Information: 0
00:23:46.329 Explicit Persistent Connection Support for Discovery: 0
00:23:46.329 Transport Requirements:
00:23:46.329 Secure Channel: Not Required
00:23:46.329 Port ID: 0 (0x0000)
00:23:46.329 Controller ID: 65535 (0xffff)
00:23:46.329 Admin Max SQ Size: 128
00:23:46.329 Transport Service Identifier: 4420
00:23:46.329 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:46.329 Transport Address: 10.0.0.2
[2024-11-04 12:28:20.881856] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:23:46.329 [2024-11-04 12:28:20.881866] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5480) on tqpair=0x965760
00:23:46.329 [2024-11-04 12:28:20.881872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.329 [2024-11-04 12:28:20.881878] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5600) on tqpair=0x965760
00:23:46.329 [2024-11-04 12:28:20.881883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.329 [2024-11-04 12:28:20.881888] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5780) on tqpair=0x965760
00:23:46.329 [2024-11-04 12:28:20.881893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.329 [2024-11-04 12:28:20.881898] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760
00:23:46.329 [2024-11-04 12:28:20.881902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.329 [2024-11-04 12:28:20.881911] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.329 [2024-11-04 12:28:20.881915] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.329 [2024-11-04 12:28:20.881919] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760)
00:23:46.329 [2024-11-04 12:28:20.881926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.329 [2024-11-04 12:28:20.881940] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0
00:23:46.329 [2024-11-04 12:28:20.882138] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.329 [2024-11-04 12:28:20.882144] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.329 [2024-11-04 12:28:20.882148] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.329 [2024-11-04 12:28:20.882152] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760
00:23:46.329 [2024-11-04 12:28:20.882159] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.329 [2024-11-04 12:28:20.882163] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.329 [2024-11-04 12:28:20.882166] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760)
00:23:46.329 [2024-11-04 12:28:20.882173]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.329 [2024-11-04 12:28:20.882186] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0 00:23:46.329 [2024-11-04 12:28:20.882423] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.329 [2024-11-04 12:28:20.882429] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.329 [2024-11-04 12:28:20.882433] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.882437] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760 00:23:46.329 [2024-11-04 12:28:20.882444] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:46.329 [2024-11-04 12:28:20.882449] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:46.329 [2024-11-04 12:28:20.882458] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.882462] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.882468] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760) 00:23:46.329 [2024-11-04 12:28:20.882475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.329 [2024-11-04 12:28:20.882485] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0 00:23:46.329 [2024-11-04 12:28:20.882686] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.329 [2024-11-04 12:28:20.882692] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.329 [2024-11-04 12:28:20.882696] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.882700] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760 00:23:46.329 [2024-11-04 12:28:20.882710] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.882714] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.882717] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760) 00:23:46.329 [2024-11-04 12:28:20.882724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.329 [2024-11-04 12:28:20.882734] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0 00:23:46.329 [2024-11-04 12:28:20.882907] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.329 [2024-11-04 12:28:20.882914] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.329 [2024-11-04 12:28:20.882918] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.882922] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760 00:23:46.329 [2024-11-04 12:28:20.882931] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.882935] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.882939] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760) 00:23:46.329 [2024-11-04 12:28:20.882946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.329 [2024-11-04 12:28:20.882956] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0 00:23:46.329 [2024-11-04 12:28:20.883162] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.329 [2024-11-04 12:28:20.883169] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.329 [2024-11-04 12:28:20.883172] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.883176] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760 00:23:46.329 [2024-11-04 12:28:20.883186] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.883190] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.883193] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760) 00:23:46.329 [2024-11-04 12:28:20.883200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.329 [2024-11-04 12:28:20.883210] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0 00:23:46.329 [2024-11-04 12:28:20.883419] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.329 [2024-11-04 12:28:20.883425] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.329 [2024-11-04 12:28:20.883429] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.883433] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760 00:23:46.329 [2024-11-04 12:28:20.883442] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.883446] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.883450] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760) 00:23:46.329 [2024-11-04 12:28:20.883461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.329 [2024-11-04 12:28:20.883471] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0 00:23:46.329 [2024-11-04 12:28:20.883649] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.329 [2024-11-04 12:28:20.883656] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.329 [2024-11-04 12:28:20.883659] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.883663] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760 00:23:46.329 [2024-11-04 12:28:20.883673] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.883677] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.883681] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760) 00:23:46.329 [2024-11-04 12:28:20.883688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.329 [2024-11-04 12:28:20.883697] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0 00:23:46.329 [2024-11-04 12:28:20.883876] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.329 [2024-11-04 12:28:20.883883] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.329 [2024-11-04 12:28:20.883886] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.883890] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760 00:23:46.329 [2024-11-04 12:28:20.883900] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.883904] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.883907] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760) 00:23:46.329 [2024-11-04 12:28:20.883914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.329 [2024-11-04 12:28:20.883925] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0 00:23:46.329 [2024-11-04 12:28:20.884099] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.329 [2024-11-04 12:28:20.884106] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.329 [2024-11-04 12:28:20.884109] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.329 [2024-11-04 12:28:20.884113] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760 00:23:46.330 [2024-11-04 12:28:20.884122] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.884126] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.884130] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760) 00:23:46.330 [2024-11-04 12:28:20.884137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.330 [2024-11-04 12:28:20.884147] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0 00:23:46.330 [2024-11-04 12:28:20.884312] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.330 [2024-11-04 12:28:20.884319] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.330 [2024-11-04 12:28:20.884323] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.884326] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760 00:23:46.330 [2024-11-04 12:28:20.884336] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.884340] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.884344] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760) 00:23:46.330 [2024-11-04 12:28:20.884350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.330 [2024-11-04 12:28:20.884363] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0 00:23:46.330 [2024-11-04 12:28:20.884536] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.330 [2024-11-04 12:28:20.884542] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.330 [2024-11-04 12:28:20.884546] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.884550] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760 00:23:46.330 [2024-11-04 12:28:20.884559] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.884563] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.884567] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760) 00:23:46.330 [2024-11-04 12:28:20.884574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.330 [2024-11-04 12:28:20.884584] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0 00:23:46.330 [2024-11-04 12:28:20.884760] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.330 [2024-11-04 12:28:20.884767] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.330 [2024-11-04 12:28:20.884770] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.884774] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760 00:23:46.330 [2024-11-04 12:28:20.884784] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.884788] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.884791] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760) 00:23:46.330 [2024-11-04 12:28:20.884798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.330 [2024-11-04 12:28:20.884809] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0 00:23:46.330 [2024-11-04 12:28:20.884975] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.330 [2024-11-04 12:28:20.884981] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.330 [2024-11-04 12:28:20.884985] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.884989] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760 00:23:46.330 [2024-11-04 12:28:20.884998] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.885002] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.885006] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760) 00:23:46.330 [2024-11-04 12:28:20.885013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.330 [2024-11-04 12:28:20.885023] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0 00:23:46.330 [2024-11-04 12:28:20.885213] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.330 [2024-11-04 12:28:20.885219] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.330 [2024-11-04 12:28:20.885223] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.885227] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760 00:23:46.330 [2024-11-04 12:28:20.885236] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.885240] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.885244] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760) 00:23:46.330 [2024-11-04 12:28:20.885251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.330 [2024-11-04 12:28:20.885261] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0 00:23:46.330 [2024-11-04 12:28:20.885456] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.330 [2024-11-04 12:28:20.885462] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.330 [2024-11-04 12:28:20.885466] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.885470] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760 00:23:46.330 [2024-11-04 12:28:20.885479] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.885483] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.885487] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760) 00:23:46.330 [2024-11-04 12:28:20.885494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.330 [2024-11-04 12:28:20.885504] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0 00:23:46.330 [2024-11-04 12:28:20.885673] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.330 [2024-11-04 12:28:20.885680] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.330 [2024-11-04 12:28:20.885683] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.885687] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760 00:23:46.330 [2024-11-04 12:28:20.885697] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.885701] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.885704] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x965760) 00:23:46.330 [2024-11-04 12:28:20.885711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.330 [2024-11-04 12:28:20.885721] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c5900, cid 3, qid 0 00:23:46.330 [2024-11-04 12:28:20.889755] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.330 [2024-11-04 12:28:20.889764] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.330 [2024-11-04 12:28:20.889767] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.330 [2024-11-04 12:28:20.889771] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c5900) on tqpair=0x965760 00:23:46.330 
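
The long run of FABRIC PROPERTY GET capsules above is the normal-shutdown handshake that follows "Prepare to destruct SSD": with RTD3E reporting 0 the host falls back to the 10000 ms shutdown timeout it logs, writes CC.SHN = 01b (normal shutdown) via FABRIC PROPERTY SET, and then polls CSTS.SHST until it reads 10b (shutdown processing complete), which the next record reports took about 7 ms. A compact sketch of that loop, with the same hypothetical prop_get()/prop_set() helpers standing in for the property capsules and a fake controller that completes after a few polls:

    /*
     * Sketch of the normal-shutdown poll behind the repeated FABRIC
     * PROPERTY GET capsules: set CC.SHN = 01b, poll CSTS.SHST for 10b.
     * The register offsets and bit fields are NVMe-spec values; the
     * controller side here is simulated so the sketch is runnable.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define NVME_REG_CC    0x14u
    #define NVME_REG_CSTS  0x1Cu
    #define CC_SHN_NORMAL  (1u << 14)  /* CC.SHN (bits 15:14) = 01b */
    #define CSTS_SHST_MASK (3u << 2)   /* CSTS.SHST (bits 3:2) */
    #define CSTS_SHST_DONE (2u << 2)   /* SHST = 10b: shutdown complete */

    static uint32_t regs[0x20];
    static int ticks;

    static uint32_t prop_get(uint32_t off) { return regs[off]; }
    static void prop_set(uint32_t off, uint32_t val) { regs[off] = val; }

    static void fake_controller_tick(void)
    {
        /* Fake controller: finish shutdown processing after a few polls. */
        if ((regs[NVME_REG_CC] & CC_SHN_NORMAL) && ++ticks >= 3)
            regs[NVME_REG_CSTS] =
                (regs[NVME_REG_CSTS] & ~CSTS_SHST_MASK) | CSTS_SHST_DONE;
    }

    int main(void)
    {
        int polls = 0;

        prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | CC_SHN_NORMAL);
        while ((prop_get(NVME_REG_CSTS) & CSTS_SHST_MASK) != CSTS_SHST_DONE) {
            fake_controller_tick(); /* real code sleeps, capped by the 10000 ms budget */
            polls++;
        }
        printf("shutdown complete after %d polls\n", polls);
        return 0;
    }
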
[2024-11-04 12:28:20.889779] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:23:46.594 00:23:46.594 12:28:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:46.594 [2024-11-04 12:28:20.928740] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:23:46.594 [2024-11-04 12:28:20.928790] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1734585 ] 00:23:46.594 [2024-11-04 12:28:20.960314] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:46.594 [2024-11-04 12:28:20.960355] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:46.594 [2024-11-04 12:28:20.960360] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:46.594 [2024-11-04 12:28:20.960372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:46.594 [2024-11-04 12:28:20.960381] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:46.594 [2024-11-04 12:28:20.963956] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:46.594 [2024-11-04 12:28:20.963985] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7ab760 0 00:23:46.594 [2024-11-04 12:28:20.971758] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:46.594 [2024-11-04 12:28:20.971769] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:46.594 [2024-11-04 12:28:20.971778] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:46.594 [2024-11-04 12:28:20.971781] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:46.594 [2024-11-04 12:28:20.971804] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.594 [2024-11-04 12:28:20.971810] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.594 [2024-11-04 12:28:20.971814] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ab760) 00:23:46.594 [2024-11-04 12:28:20.971825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:46.594 [2024-11-04 12:28:20.971842] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b480, cid 0, qid 0 00:23:46.594 [2024-11-04 12:28:20.978757] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.594 [2024-11-04 12:28:20.978767] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.594 [2024-11-04 12:28:20.978771] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.594 [2024-11-04 12:28:20.978775] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b480) on tqpair=0x7ab760 00:23:46.594 [2024-11-04 12:28:20.978784] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:46.594 [2024-11-04 12:28:20.978791] 
nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:46.594 [2024-11-04 12:28:20.978796] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:46.594 [2024-11-04 12:28:20.978809] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.594 [2024-11-04 12:28:20.978813] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.594 [2024-11-04 12:28:20.978817] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ab760) 00:23:46.595 [2024-11-04 12:28:20.978825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.595 [2024-11-04 12:28:20.978839] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b480, cid 0, qid 0 00:23:46.595 [2024-11-04 12:28:20.978987] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.595 [2024-11-04 12:28:20.978994] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.595 [2024-11-04 12:28:20.978997] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.979001] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b480) on tqpair=0x7ab760 00:23:46.595 [2024-11-04 12:28:20.979006] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:46.595 [2024-11-04 12:28:20.979014] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:46.595 [2024-11-04 12:28:20.979021] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.979024] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.979028] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ab760) 00:23:46.595 [2024-11-04 12:28:20.979035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.595 [2024-11-04 12:28:20.979045] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b480, cid 0, qid 0 00:23:46.595 [2024-11-04 12:28:20.979208] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.595 [2024-11-04 12:28:20.979214] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.595 [2024-11-04 12:28:20.979221] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.979225] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b480) on tqpair=0x7ab760 00:23:46.595 [2024-11-04 12:28:20.979230] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:46.595 [2024-11-04 12:28:20.979238] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:46.595 [2024-11-04 12:28:20.979245] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.979248] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.979252] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ab760) 00:23:46.595 [2024-11-04 
12:28:20.979259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.595 [2024-11-04 12:28:20.979270] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b480, cid 0, qid 0 00:23:46.595 [2024-11-04 12:28:20.979427] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.595 [2024-11-04 12:28:20.979433] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.595 [2024-11-04 12:28:20.979437] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.979441] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b480) on tqpair=0x7ab760 00:23:46.595 [2024-11-04 12:28:20.979445] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:46.595 [2024-11-04 12:28:20.979455] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.979459] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.979463] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ab760) 00:23:46.595 [2024-11-04 12:28:20.979470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.595 [2024-11-04 12:28:20.979480] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b480, cid 0, qid 0 00:23:46.595 [2024-11-04 12:28:20.979602] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.595 [2024-11-04 12:28:20.979609] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.595 [2024-11-04 12:28:20.979612] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.979616] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b480) on tqpair=0x7ab760 00:23:46.595 [2024-11-04 12:28:20.979621] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:46.595 [2024-11-04 12:28:20.979625] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:46.595 [2024-11-04 12:28:20.979632] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:46.595 [2024-11-04 12:28:20.979738] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:46.595 [2024-11-04 12:28:20.979742] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:46.595 [2024-11-04 12:28:20.979754] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.979758] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.979762] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ab760) 00:23:46.595 [2024-11-04 12:28:20.979769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.595 [2024-11-04 12:28:20.979779] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x80b480, cid 0, qid 0 00:23:46.595 [2024-11-04 12:28:20.979977] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.595 [2024-11-04 12:28:20.979984] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.595 [2024-11-04 12:28:20.979987] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.979991] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b480) on tqpair=0x7ab760 00:23:46.595 [2024-11-04 12:28:20.979996] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:46.595 [2024-11-04 12:28:20.980005] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.980009] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.980012] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ab760) 00:23:46.595 [2024-11-04 12:28:20.980019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.595 [2024-11-04 12:28:20.980029] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b480, cid 0, qid 0 00:23:46.595 [2024-11-04 12:28:20.980208] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.595 [2024-11-04 12:28:20.980214] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.595 [2024-11-04 12:28:20.980218] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.980222] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b480) on tqpair=0x7ab760 00:23:46.595 [2024-11-04 12:28:20.980226] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:46.595 [2024-11-04 12:28:20.980231] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:46.595 [2024-11-04 12:28:20.980238] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:46.595 [2024-11-04 12:28:20.980245] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:46.595 [2024-11-04 12:28:20.980254] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.980257] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ab760) 00:23:46.595 [2024-11-04 12:28:20.980264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.595 [2024-11-04 12:28:20.980275] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b480, cid 0, qid 0 00:23:46.595 [2024-11-04 12:28:20.980485] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.595 [2024-11-04 12:28:20.980492] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.595 [2024-11-04 12:28:20.980495] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.980499] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7ab760): datao=0, datal=4096, 
cccid=0 00:23:46.595 [2024-11-04 12:28:20.980504] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x80b480) on tqpair(0x7ab760): expected_datao=0, payload_size=4096 00:23:46.595 [2024-11-04 12:28:20.980509] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.980553] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.980558] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.980662] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.595 [2024-11-04 12:28:20.980668] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.595 [2024-11-04 12:28:20.980672] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.980676] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b480) on tqpair=0x7ab760 00:23:46.595 [2024-11-04 12:28:20.980685] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:46.595 [2024-11-04 12:28:20.980690] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:46.595 [2024-11-04 12:28:20.980694] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:46.595 [2024-11-04 12:28:20.980698] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:46.595 [2024-11-04 12:28:20.980703] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:46.595 [2024-11-04 12:28:20.980707] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:46.595 [2024-11-04 12:28:20.980715] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:46.595 [2024-11-04 12:28:20.980722] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.980726] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.980729] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ab760) 00:23:46.595 [2024-11-04 12:28:20.980736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:46.595 [2024-11-04 12:28:20.980755] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b480, cid 0, qid 0 00:23:46.595 [2024-11-04 12:28:20.980983] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.595 [2024-11-04 12:28:20.980989] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.595 [2024-11-04 12:28:20.980993] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.980997] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b480) on tqpair=0x7ab760 00:23:46.595 [2024-11-04 12:28:20.981006] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.595 [2024-11-04 12:28:20.981010] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:20.981014] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7ab760) 00:23:46.596 [2024-11-04 
12:28:20.981020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.596 [2024-11-04 12:28:20.981026] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:20.981030] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:20.981033] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7ab760) 00:23:46.596 [2024-11-04 12:28:20.981039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.596 [2024-11-04 12:28:20.981045] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:20.981049] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:20.981053] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7ab760) 00:23:46.596 [2024-11-04 12:28:20.981058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.596 [2024-11-04 12:28:20.981065] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:20.981068] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:20.981072] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ab760) 00:23:46.596 [2024-11-04 12:28:20.981078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.596 [2024-11-04 12:28:20.981082] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:46.596 [2024-11-04 12:28:20.981090] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:46.596 [2024-11-04 12:28:20.981098] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:20.981102] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7ab760) 00:23:46.596 [2024-11-04 12:28:20.981109] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.596 [2024-11-04 12:28:20.981121] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b480, cid 0, qid 0 00:23:46.596 [2024-11-04 12:28:20.981126] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b600, cid 1, qid 0 00:23:46.596 [2024-11-04 12:28:20.981131] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b780, cid 2, qid 0 00:23:46.596 [2024-11-04 12:28:20.981136] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b900, cid 3, qid 0 00:23:46.596 [2024-11-04 12:28:20.981141] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80ba80, cid 4, qid 0 00:23:46.596 [2024-11-04 12:28:20.981245] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.596 [2024-11-04 12:28:20.981251] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.596 [2024-11-04 12:28:20.981254] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:20.981258] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80ba80) on tqpair=0x7ab760 00:23:46.596 [2024-11-04 12:28:20.981265] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:46.596 [2024-11-04 12:28:20.981270] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:46.596 [2024-11-04 12:28:20.981278] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:46.596 [2024-11-04 12:28:20.981284] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:46.596 [2024-11-04 12:28:20.981290] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:20.981294] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:20.981297] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7ab760) 00:23:46.596 [2024-11-04 12:28:20.981304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:46.596 [2024-11-04 12:28:20.981314] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80ba80, cid 4, qid 0 00:23:46.596 [2024-11-04 12:28:20.981426] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.596 [2024-11-04 12:28:20.981433] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.596 [2024-11-04 12:28:20.981436] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:20.981440] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80ba80) on tqpair=0x7ab760 00:23:46.596 [2024-11-04 12:28:20.981504] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:46.596 [2024-11-04 12:28:20.981513] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:46.596 [2024-11-04 12:28:20.981520] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:20.981524] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7ab760) 00:23:46.596 [2024-11-04 12:28:20.981530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.596 [2024-11-04 12:28:20.981541] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80ba80, cid 4, qid 0 00:23:46.596 [2024-11-04 12:28:20.981660] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.596 [2024-11-04 12:28:20.981666] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.596 [2024-11-04 12:28:20.981670] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:20.981674] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7ab760): datao=0, datal=4096, cccid=4 00:23:46.596 [2024-11-04 12:28:20.981678] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x80ba80) on tqpair(0x7ab760): expected_datao=0, payload_size=4096 00:23:46.596 
[2024-11-04 12:28:20.981682] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:20.981699] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:20.981703] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:21.024755] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.596 [2024-11-04 12:28:21.024765] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.596 [2024-11-04 12:28:21.024769] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:21.024773] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80ba80) on tqpair=0x7ab760 00:23:46.596 [2024-11-04 12:28:21.024783] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:46.596 [2024-11-04 12:28:21.024798] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:46.596 [2024-11-04 12:28:21.024808] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:46.596 [2024-11-04 12:28:21.024815] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:21.024819] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7ab760) 00:23:46.596 [2024-11-04 12:28:21.024826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.596 [2024-11-04 12:28:21.024838] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80ba80, cid 4, qid 0 00:23:46.596 [2024-11-04 12:28:21.025013] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.596 [2024-11-04 12:28:21.025019] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.596 [2024-11-04 12:28:21.025023] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:21.025027] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7ab760): datao=0, datal=4096, cccid=4 00:23:46.596 [2024-11-04 12:28:21.025031] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x80ba80) on tqpair(0x7ab760): expected_datao=0, payload_size=4096 00:23:46.596 [2024-11-04 12:28:21.025036] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:21.025050] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:21.025054] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:21.065921] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.596 [2024-11-04 12:28:21.065931] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.596 [2024-11-04 12:28:21.065934] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:21.065938] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80ba80) on tqpair=0x7ab760 00:23:46.596 [2024-11-04 12:28:21.065951] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:46.596 [2024-11-04 12:28:21.065960] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:46.596 [2024-11-04 12:28:21.065968] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:21.065971] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7ab760) 00:23:46.596 [2024-11-04 12:28:21.065980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.596 [2024-11-04 12:28:21.065992] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80ba80, cid 4, qid 0 00:23:46.596 [2024-11-04 12:28:21.066154] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.596 [2024-11-04 12:28:21.066160] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.596 [2024-11-04 12:28:21.066164] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:21.066168] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7ab760): datao=0, datal=4096, cccid=4 00:23:46.596 [2024-11-04 12:28:21.066173] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x80ba80) on tqpair(0x7ab760): expected_datao=0, payload_size=4096 00:23:46.596 [2024-11-04 12:28:21.066177] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:21.066192] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:21.066196] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:21.110758] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.596 [2024-11-04 12:28:21.110769] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.596 [2024-11-04 12:28:21.110773] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.596 [2024-11-04 12:28:21.110777] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80ba80) on tqpair=0x7ab760 00:23:46.596 [2024-11-04 12:28:21.110785] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:46.596 [2024-11-04 12:28:21.110793] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:46.596 [2024-11-04 12:28:21.110803] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:46.597 [2024-11-04 12:28:21.110809] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:46.597 [2024-11-04 12:28:21.110815] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:46.597 [2024-11-04 12:28:21.110820] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:46.597 [2024-11-04 12:28:21.110825] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:46.597 [2024-11-04 12:28:21.110830] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 
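The ladder of "setting state to ..." transitions traced above (read vs/cap, check en, CC.EN = 1, wait for CSTS.RDY = 1, identify controller, configure AER, keep alive, number of queues, identify active ns/ns/descriptors) is the controller initialization sequence the SPDK host library walks for a fabrics controller. As a rough sketch, assuming only the public SPDK host API (not the test binary itself), one way an application drives this whole ladder is a single spdk_nvme_connect() call; the address, port, and subsystem NQN below are the ones appearing in this log:

#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;

	/* Bring up the SPDK environment (hugepage memory, etc.). */
	spdk_env_opts_init(&env_opts);
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Target coordinates as they appear in this log. */
	memset(&trid, 0, sizeof(trid));
	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

	/*
	 * spdk_nvme_connect() runs the init state machine logged by
	 * nvme_ctrlr.c: read vs/cap, CC.EN = 1, poll CSTS.RDY = 1,
	 * IDENTIFY, AER configuration, keep alive, namespace discovery.
	 */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to %s failed\n", trid.traddr);
		return 1;
	}

	printf("connected: %u namespace(s)\n", spdk_nvme_ctrlr_get_num_ns(ctrlr));
	spdk_nvme_detach(ctrlr);
	return 0;
}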
00:23:46.597 [2024-11-04 12:28:21.110835] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:46.597 [2024-11-04 12:28:21.110849] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.110853] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7ab760) 00:23:46.597 [2024-11-04 12:28:21.110860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.597 [2024-11-04 12:28:21.110867] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.110871] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.110875] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7ab760) 00:23:46.597 [2024-11-04 12:28:21.110881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.597 [2024-11-04 12:28:21.110894] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80ba80, cid 4, qid 0 00:23:46.597 [2024-11-04 12:28:21.110899] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80bc00, cid 5, qid 0 00:23:46.597 [2024-11-04 12:28:21.110982] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.597 [2024-11-04 12:28:21.110989] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.597 [2024-11-04 12:28:21.110993] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.110997] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80ba80) on tqpair=0x7ab760 00:23:46.597 [2024-11-04 12:28:21.111004] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.597 [2024-11-04 12:28:21.111010] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.597 [2024-11-04 12:28:21.111013] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.111017] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80bc00) on tqpair=0x7ab760 00:23:46.597 [2024-11-04 12:28:21.111026] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.111030] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7ab760) 00:23:46.597 [2024-11-04 12:28:21.111037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.597 [2024-11-04 12:28:21.111047] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80bc00, cid 5, qid 0 00:23:46.597 [2024-11-04 12:28:21.111220] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.597 [2024-11-04 12:28:21.111226] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.597 [2024-11-04 12:28:21.111230] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.111234] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80bc00) on tqpair=0x7ab760 00:23:46.597 [2024-11-04 12:28:21.111243] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.111247] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7ab760) 
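The controller report printed below (vendor ID, MDTS, queue limits, namespace data) is formatted from the IDENTIFY pages fetched during the sequence above; the same cached data is reachable programmatically through accessors on the connected controller. A minimal sketch, assuming a ctrlr obtained as in the previous fragment:

#include <stdio.h>
#include <stdint.h>
#include "spdk/nvme.h"

/*
 * Sketch only: reads the cached IDENTIFY data that the report below is
 * printed from. mn/sn/fr are fixed-width, space-padded fields, hence %.Ns.
 */
static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	uint32_t nsid;

	printf("Model: %.40s Serial: %.20s FW: %.8s\n",
	       cdata->mn, cdata->sn, cdata->fr);
	printf("Max data transfer size: %u bytes\n",
	       spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

	/* Walk the active namespace list ("Namespace 1 was added" above). */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("ns %u: %ju LBAs of %u bytes\n", nsid,
		       (uintmax_t)spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}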
00:23:46.597 [2024-11-04 12:28:21.111254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.597 [2024-11-04 12:28:21.111264] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80bc00, cid 5, qid 0 00:23:46.597 [2024-11-04 12:28:21.111403] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.597 [2024-11-04 12:28:21.111409] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.597 [2024-11-04 12:28:21.111413] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.111417] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80bc00) on tqpair=0x7ab760 00:23:46.597 [2024-11-04 12:28:21.111426] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.111430] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7ab760) 00:23:46.597 [2024-11-04 12:28:21.111436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.597 [2024-11-04 12:28:21.111446] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80bc00, cid 5, qid 0 00:23:46.597 [2024-11-04 12:28:21.111625] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.597 [2024-11-04 12:28:21.111631] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.597 [2024-11-04 12:28:21.111635] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.111639] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80bc00) on tqpair=0x7ab760 00:23:46.597 [2024-11-04 12:28:21.111653] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.111657] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7ab760) 00:23:46.597 [2024-11-04 12:28:21.111664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.597 [2024-11-04 12:28:21.111672] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.111677] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7ab760) 00:23:46.597 [2024-11-04 12:28:21.111684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.597 [2024-11-04 12:28:21.111691] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.111695] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x7ab760) 00:23:46.597 [2024-11-04 12:28:21.111701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.597 [2024-11-04 12:28:21.111710] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.111714] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7ab760) 00:23:46.597 [2024-11-04 12:28:21.111720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) 
qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.597 [2024-11-04 12:28:21.111732] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80bc00, cid 5, qid 0 00:23:46.597 [2024-11-04 12:28:21.111737] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80ba80, cid 4, qid 0 00:23:46.597 [2024-11-04 12:28:21.111742] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80bd80, cid 6, qid 0 00:23:46.597 [2024-11-04 12:28:21.111753] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80bf00, cid 7, qid 0 00:23:46.597 [2024-11-04 12:28:21.111895] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.597 [2024-11-04 12:28:21.111902] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.597 [2024-11-04 12:28:21.111906] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.111910] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7ab760): datao=0, datal=8192, cccid=5 00:23:46.597 [2024-11-04 12:28:21.111914] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x80bc00) on tqpair(0x7ab760): expected_datao=0, payload_size=8192 00:23:46.597 [2024-11-04 12:28:21.111919] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.111997] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.112001] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.112007] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.597 [2024-11-04 12:28:21.112013] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.597 [2024-11-04 12:28:21.112016] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.112020] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7ab760): datao=0, datal=512, cccid=4 00:23:46.597 [2024-11-04 12:28:21.112025] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x80ba80) on tqpair(0x7ab760): expected_datao=0, payload_size=512 00:23:46.597 [2024-11-04 12:28:21.112029] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.112048] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.112052] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.112058] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.597 [2024-11-04 12:28:21.112064] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.597 [2024-11-04 12:28:21.112068] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.112071] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7ab760): datao=0, datal=512, cccid=6 00:23:46.597 [2024-11-04 12:28:21.112076] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x80bd80) on tqpair(0x7ab760): expected_datao=0, payload_size=512 00:23:46.597 [2024-11-04 12:28:21.112080] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.112089] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.112093] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.112099] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.597 [2024-11-04 12:28:21.112104] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.597 [2024-11-04 12:28:21.112108] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.112111] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7ab760): datao=0, datal=4096, cccid=7 00:23:46.597 [2024-11-04 12:28:21.112116] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x80bf00) on tqpair(0x7ab760): expected_datao=0, payload_size=4096 00:23:46.597 [2024-11-04 12:28:21.112120] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.112127] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.112131] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.112253] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.597 [2024-11-04 12:28:21.112259] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.597 [2024-11-04 12:28:21.112263] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.112267] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80bc00) on tqpair=0x7ab760 00:23:46.597 [2024-11-04 12:28:21.112279] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.597 [2024-11-04 12:28:21.112285] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.597 [2024-11-04 12:28:21.112288] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.597 [2024-11-04 12:28:21.112292] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80ba80) on tqpair=0x7ab760 00:23:46.597 [2024-11-04 12:28:21.112302] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.597 [2024-11-04 12:28:21.112308] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.598 [2024-11-04 12:28:21.112312] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.598 [2024-11-04 12:28:21.112316] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80bd80) on tqpair=0x7ab760 00:23:46.598 [2024-11-04 12:28:21.112323] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.598 [2024-11-04 12:28:21.112329] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.598 [2024-11-04 12:28:21.112332] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.598 [2024-11-04 12:28:21.112336] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80bf00) on tqpair=0x7ab760 00:23:46.598 ===================================================== 00:23:46.598 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:46.598 ===================================================== 00:23:46.598 Controller Capabilities/Features 00:23:46.598 ================================ 00:23:46.598 Vendor ID: 8086 00:23:46.598 Subsystem Vendor ID: 8086 00:23:46.598 Serial Number: SPDK00000000000001 00:23:46.598 Model Number: SPDK bdev Controller 00:23:46.598 Firmware Version: 25.01 00:23:46.598 Recommended Arb Burst: 6 00:23:46.598 IEEE OUI Identifier: e4 d2 5c 00:23:46.598 Multi-path I/O 00:23:46.598 May have multiple subsystem ports: Yes 00:23:46.598 May have multiple controllers: Yes 00:23:46.598 Associated with SR-IOV VF: No 00:23:46.598 Max Data Transfer 
Size: 131072 00:23:46.598 Max Number of Namespaces: 32 00:23:46.598 Max Number of I/O Queues: 127 00:23:46.598 NVMe Specification Version (VS): 1.3 00:23:46.598 NVMe Specification Version (Identify): 1.3 00:23:46.598 Maximum Queue Entries: 128 00:23:46.598 Contiguous Queues Required: Yes 00:23:46.598 Arbitration Mechanisms Supported 00:23:46.598 Weighted Round Robin: Not Supported 00:23:46.598 Vendor Specific: Not Supported 00:23:46.598 Reset Timeout: 15000 ms 00:23:46.598 Doorbell Stride: 4 bytes 00:23:46.598 NVM Subsystem Reset: Not Supported 00:23:46.598 Command Sets Supported 00:23:46.598 NVM Command Set: Supported 00:23:46.598 Boot Partition: Not Supported 00:23:46.598 Memory Page Size Minimum: 4096 bytes 00:23:46.598 Memory Page Size Maximum: 4096 bytes 00:23:46.598 Persistent Memory Region: Not Supported 00:23:46.598 Optional Asynchronous Events Supported 00:23:46.598 Namespace Attribute Notices: Supported 00:23:46.598 Firmware Activation Notices: Not Supported 00:23:46.598 ANA Change Notices: Not Supported 00:23:46.598 PLE Aggregate Log Change Notices: Not Supported 00:23:46.598 LBA Status Info Alert Notices: Not Supported 00:23:46.598 EGE Aggregate Log Change Notices: Not Supported 00:23:46.598 Normal NVM Subsystem Shutdown event: Not Supported 00:23:46.598 Zone Descriptor Change Notices: Not Supported 00:23:46.598 Discovery Log Change Notices: Not Supported 00:23:46.598 Controller Attributes 00:23:46.598 128-bit Host Identifier: Supported 00:23:46.598 Non-Operational Permissive Mode: Not Supported 00:23:46.598 NVM Sets: Not Supported 00:23:46.598 Read Recovery Levels: Not Supported 00:23:46.598 Endurance Groups: Not Supported 00:23:46.598 Predictable Latency Mode: Not Supported 00:23:46.598 Traffic Based Keep ALive: Not Supported 00:23:46.598 Namespace Granularity: Not Supported 00:23:46.598 SQ Associations: Not Supported 00:23:46.598 UUID List: Not Supported 00:23:46.598 Multi-Domain Subsystem: Not Supported 00:23:46.598 Fixed Capacity Management: Not Supported 00:23:46.598 Variable Capacity Management: Not Supported 00:23:46.598 Delete Endurance Group: Not Supported 00:23:46.598 Delete NVM Set: Not Supported 00:23:46.598 Extended LBA Formats Supported: Not Supported 00:23:46.598 Flexible Data Placement Supported: Not Supported 00:23:46.598 00:23:46.598 Controller Memory Buffer Support 00:23:46.598 ================================ 00:23:46.598 Supported: No 00:23:46.598 00:23:46.598 Persistent Memory Region Support 00:23:46.598 ================================ 00:23:46.598 Supported: No 00:23:46.598 00:23:46.598 Admin Command Set Attributes 00:23:46.598 ============================ 00:23:46.598 Security Send/Receive: Not Supported 00:23:46.598 Format NVM: Not Supported 00:23:46.598 Firmware Activate/Download: Not Supported 00:23:46.598 Namespace Management: Not Supported 00:23:46.598 Device Self-Test: Not Supported 00:23:46.598 Directives: Not Supported 00:23:46.598 NVMe-MI: Not Supported 00:23:46.598 Virtualization Management: Not Supported 00:23:46.598 Doorbell Buffer Config: Not Supported 00:23:46.598 Get LBA Status Capability: Not Supported 00:23:46.598 Command & Feature Lockdown Capability: Not Supported 00:23:46.598 Abort Command Limit: 4 00:23:46.598 Async Event Request Limit: 4 00:23:46.598 Number of Firmware Slots: N/A 00:23:46.598 Firmware Slot 1 Read-Only: N/A 00:23:46.598 Firmware Activation Without Reset: N/A 00:23:46.598 Multiple Update Detection Support: N/A 00:23:46.598 Firmware Update Granularity: No Information Provided 00:23:46.598 Per-Namespace SMART Log: No 
00:23:46.598 Asymmetric Namespace Access Log Page: Not Supported 00:23:46.598 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:46.598 Command Effects Log Page: Supported 00:23:46.598 Get Log Page Extended Data: Supported 00:23:46.598 Telemetry Log Pages: Not Supported 00:23:46.598 Persistent Event Log Pages: Not Supported 00:23:46.598 Supported Log Pages Log Page: May Support 00:23:46.598 Commands Supported & Effects Log Page: Not Supported 00:23:46.598 Feature Identifiers & Effects Log Page:May Support 00:23:46.598 NVMe-MI Commands & Effects Log Page: May Support 00:23:46.598 Data Area 4 for Telemetry Log: Not Supported 00:23:46.598 Error Log Page Entries Supported: 128 00:23:46.598 Keep Alive: Supported 00:23:46.598 Keep Alive Granularity: 10000 ms 00:23:46.598 00:23:46.598 NVM Command Set Attributes 00:23:46.598 ========================== 00:23:46.598 Submission Queue Entry Size 00:23:46.598 Max: 64 00:23:46.598 Min: 64 00:23:46.598 Completion Queue Entry Size 00:23:46.598 Max: 16 00:23:46.598 Min: 16 00:23:46.598 Number of Namespaces: 32 00:23:46.598 Compare Command: Supported 00:23:46.598 Write Uncorrectable Command: Not Supported 00:23:46.598 Dataset Management Command: Supported 00:23:46.598 Write Zeroes Command: Supported 00:23:46.598 Set Features Save Field: Not Supported 00:23:46.598 Reservations: Supported 00:23:46.598 Timestamp: Not Supported 00:23:46.598 Copy: Supported 00:23:46.598 Volatile Write Cache: Present 00:23:46.598 Atomic Write Unit (Normal): 1 00:23:46.598 Atomic Write Unit (PFail): 1 00:23:46.598 Atomic Compare & Write Unit: 1 00:23:46.598 Fused Compare & Write: Supported 00:23:46.598 Scatter-Gather List 00:23:46.598 SGL Command Set: Supported 00:23:46.598 SGL Keyed: Supported 00:23:46.598 SGL Bit Bucket Descriptor: Not Supported 00:23:46.598 SGL Metadata Pointer: Not Supported 00:23:46.598 Oversized SGL: Not Supported 00:23:46.598 SGL Metadata Address: Not Supported 00:23:46.598 SGL Offset: Supported 00:23:46.598 Transport SGL Data Block: Not Supported 00:23:46.598 Replay Protected Memory Block: Not Supported 00:23:46.598 00:23:46.598 Firmware Slot Information 00:23:46.598 ========================= 00:23:46.598 Active slot: 1 00:23:46.598 Slot 1 Firmware Revision: 25.01 00:23:46.598 00:23:46.598 00:23:46.598 Commands Supported and Effects 00:23:46.598 ============================== 00:23:46.598 Admin Commands 00:23:46.598 -------------- 00:23:46.598 Get Log Page (02h): Supported 00:23:46.598 Identify (06h): Supported 00:23:46.598 Abort (08h): Supported 00:23:46.598 Set Features (09h): Supported 00:23:46.598 Get Features (0Ah): Supported 00:23:46.598 Asynchronous Event Request (0Ch): Supported 00:23:46.598 Keep Alive (18h): Supported 00:23:46.598 I/O Commands 00:23:46.598 ------------ 00:23:46.598 Flush (00h): Supported LBA-Change 00:23:46.598 Write (01h): Supported LBA-Change 00:23:46.598 Read (02h): Supported 00:23:46.598 Compare (05h): Supported 00:23:46.598 Write Zeroes (08h): Supported LBA-Change 00:23:46.598 Dataset Management (09h): Supported LBA-Change 00:23:46.598 Copy (19h): Supported LBA-Change 00:23:46.598 00:23:46.598 Error Log 00:23:46.598 ========= 00:23:46.598 00:23:46.598 Arbitration 00:23:46.598 =========== 00:23:46.598 Arbitration Burst: 1 00:23:46.598 00:23:46.598 Power Management 00:23:46.598 ================ 00:23:46.598 Number of Power States: 1 00:23:46.598 Current Power State: Power State #0 00:23:46.598 Power State #0: 00:23:46.598 Max Power: 0.00 W 00:23:46.598 Non-Operational State: Operational 00:23:46.598 Entry Latency: Not Reported 
00:23:46.598 Exit Latency: Not Reported 00:23:46.598 Relative Read Throughput: 0 00:23:46.598 Relative Read Latency: 0 00:23:46.598 Relative Write Throughput: 0 00:23:46.598 Relative Write Latency: 0 00:23:46.598 Idle Power: Not Reported 00:23:46.598 Active Power: Not Reported 00:23:46.598 Non-Operational Permissive Mode: Not Supported 00:23:46.598 00:23:46.598 Health Information 00:23:46.598 ================== 00:23:46.598 Critical Warnings: 00:23:46.598 Available Spare Space: OK 00:23:46.598 Temperature: OK 00:23:46.598 Device Reliability: OK 00:23:46.599 Read Only: No 00:23:46.599 Volatile Memory Backup: OK 00:23:46.599 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:46.599 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:46.599 Available Spare: 0% 00:23:46.599 Available Spare Threshold: 0% 00:23:46.599 [2024-11-04 12:28:21.112435] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.112441] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7ab760) 00:23:46.599 [2024-11-04 12:28:21.112448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.599 [2024-11-04 12:28:21.112460] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80bf00, cid 7, qid 0 00:23:46.599 [2024-11-04 12:28:21.112659] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.599 [2024-11-04 12:28:21.112665] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.599 [2024-11-04 12:28:21.112669] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.112673] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80bf00) on tqpair=0x7ab760 00:23:46.599 [2024-11-04 12:28:21.112701] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:46.599 [2024-11-04 12:28:21.112711] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b480) on tqpair=0x7ab760 00:23:46.599 [2024-11-04 12:28:21.112717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.599 [2024-11-04 12:28:21.112723] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b600) on tqpair=0x7ab760 00:23:46.599 [2024-11-04 12:28:21.112729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.599 [2024-11-04 12:28:21.112735] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b780) on tqpair=0x7ab760 00:23:46.599 [2024-11-04 12:28:21.112739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.599 [2024-11-04 12:28:21.112744] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b900) on tqpair=0x7ab760 00:23:46.599 [2024-11-04 12:28:21.112757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.599 [2024-11-04 12:28:21.112765] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.112769] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.112773] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*:
capsule_cmd cid=3 on tqpair(0x7ab760) 00:23:46.599 [2024-11-04 12:28:21.112780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.599 [2024-11-04 12:28:21.112792] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b900, cid 3, qid 0 00:23:46.599 [2024-11-04 12:28:21.112880] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.599 [2024-11-04 12:28:21.112887] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.599 [2024-11-04 12:28:21.112890] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.112894] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b900) on tqpair=0x7ab760 00:23:46.599 [2024-11-04 12:28:21.112901] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.112905] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.112908] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ab760) 00:23:46.599 [2024-11-04 12:28:21.112915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.599 [2024-11-04 12:28:21.112928] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b900, cid 3, qid 0 00:23:46.599 [2024-11-04 12:28:21.113119] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.599 [2024-11-04 12:28:21.113125] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.599 [2024-11-04 12:28:21.113129] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.113132] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b900) on tqpair=0x7ab760 00:23:46.599 [2024-11-04 12:28:21.113137] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:46.599 [2024-11-04 12:28:21.113142] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:46.599 [2024-11-04 12:28:21.113151] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.113155] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.113159] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ab760) 00:23:46.599 [2024-11-04 12:28:21.113166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.599 [2024-11-04 12:28:21.113176] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b900, cid 3, qid 0 00:23:46.599 [2024-11-04 12:28:21.113331] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.599 [2024-11-04 12:28:21.113337] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.599 [2024-11-04 12:28:21.113341] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.113345] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b900) on tqpair=0x7ab760 00:23:46.599 [2024-11-04 12:28:21.113354] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.113360] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.599 
[2024-11-04 12:28:21.113364] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ab760) 00:23:46.599 [2024-11-04 12:28:21.113371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.599 [2024-11-04 12:28:21.113381] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b900, cid 3, qid 0 00:23:46.599 [2024-11-04 12:28:21.113541] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.599 [2024-11-04 12:28:21.113548] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.599 [2024-11-04 12:28:21.113551] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.113555] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b900) on tqpair=0x7ab760 00:23:46.599 [2024-11-04 12:28:21.113564] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.113569] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.113572] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ab760) 00:23:46.599 [2024-11-04 12:28:21.113579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.599 [2024-11-04 12:28:21.113589] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b900, cid 3, qid 0 00:23:46.599 [2024-11-04 12:28:21.113756] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.599 [2024-11-04 12:28:21.113763] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.599 [2024-11-04 12:28:21.113767] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.113771] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b900) on tqpair=0x7ab760 00:23:46.599 [2024-11-04 12:28:21.113780] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.113784] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.113788] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ab760) 00:23:46.599 [2024-11-04 12:28:21.113795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.599 [2024-11-04 12:28:21.113806] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b900, cid 3, qid 0 00:23:46.599 [2024-11-04 12:28:21.113942] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.599 [2024-11-04 12:28:21.113949] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.599 [2024-11-04 12:28:21.113952] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.113956] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b900) on tqpair=0x7ab760 00:23:46.599 [2024-11-04 12:28:21.113966] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.113970] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.113973] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ab760) 00:23:46.599 [2024-11-04 12:28:21.113980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.599 [2024-11-04 12:28:21.113990] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b900, cid 3, qid 0 00:23:46.599 [2024-11-04 12:28:21.114171] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.599 [2024-11-04 12:28:21.114178] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.599 [2024-11-04 12:28:21.114181] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.114185] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b900) on tqpair=0x7ab760 00:23:46.599 [2024-11-04 12:28:21.114195] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.114199] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.599 [2024-11-04 12:28:21.114202] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ab760) 00:23:46.600 [2024-11-04 12:28:21.114211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.600 [2024-11-04 12:28:21.114222] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b900, cid 3, qid 0 00:23:46.600 [2024-11-04 12:28:21.114437] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.600 [2024-11-04 12:28:21.114444] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.600 [2024-11-04 12:28:21.114447] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.600 [2024-11-04 12:28:21.114451] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b900) on tqpair=0x7ab760 00:23:46.600 [2024-11-04 12:28:21.114461] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.600 [2024-11-04 12:28:21.114465] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.600 [2024-11-04 12:28:21.114468] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ab760) 00:23:46.600 [2024-11-04 12:28:21.114475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.600 [2024-11-04 12:28:21.114485] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b900, cid 3, qid 0 00:23:46.600 [2024-11-04 12:28:21.114625] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.600 [2024-11-04 12:28:21.114632] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.600 [2024-11-04 12:28:21.114636] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.600 [2024-11-04 12:28:21.114640] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b900) on tqpair=0x7ab760 00:23:46.600 [2024-11-04 12:28:21.114649] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.600 [2024-11-04 12:28:21.114653] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.600 [2024-11-04 12:28:21.114657] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7ab760) 00:23:46.600 [2024-11-04 12:28:21.114664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.600 [2024-11-04 12:28:21.114674] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b900, cid 3, qid 0 
00:23:46.600 [2024-11-04 12:28:21.118757] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.600 [2024-11-04 12:28:21.118766] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.600 [2024-11-04 12:28:21.118770] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.600 [2024-11-04 12:28:21.118774] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x80b900) on tqpair=0x7ab760 00:23:46.600 [2024-11-04 12:28:21.118781] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:23:46.600 Life Percentage Used: 0% 00:23:46.600 Data Units Read: 0 00:23:46.600 Data Units Written: 0 00:23:46.600 Host Read Commands: 0 00:23:46.600 Host Write Commands: 0 00:23:46.600 Controller Busy Time: 0 minutes 00:23:46.600 Power Cycles: 0 00:23:46.600 Power On Hours: 0 hours 00:23:46.600 Unsafe Shutdowns: 0 00:23:46.600 Unrecoverable Media Errors: 0 00:23:46.600 Lifetime Error Log Entries: 0 00:23:46.600 Warning Temperature Time: 0 minutes 00:23:46.600 Critical Temperature Time: 0 minutes 00:23:46.600 00:23:46.600 Number of Queues 00:23:46.600 ================ 00:23:46.600 Number of I/O Submission Queues: 127 00:23:46.600 Number of I/O Completion Queues: 127 00:23:46.600 00:23:46.600 Active Namespaces 00:23:46.600 ================= 00:23:46.600 Namespace ID:1 00:23:46.600 Error Recovery Timeout: Unlimited 00:23:46.600 Command Set Identifier: NVM (00h) 00:23:46.600 Deallocate: Supported 00:23:46.600 Deallocated/Unwritten Error: Not Supported 00:23:46.600 Deallocated Read Value: Unknown 00:23:46.600 Deallocate in Write Zeroes: Not Supported 00:23:46.600 Deallocated Guard Field: 0xFFFF 00:23:46.600 Flush: Supported 00:23:46.600 Reservation: Supported 00:23:46.600 Namespace Sharing Capabilities: Multiple Controllers 00:23:46.600 Size (in LBAs): 131072 (0GiB) 00:23:46.600 Capacity (in LBAs): 131072 (0GiB) 00:23:46.600 Utilization (in LBAs): 131072 (0GiB) 00:23:46.600 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:46.600 EUI64: ABCDEF0123456789 00:23:46.600 UUID: a2af9a56-5af4-4f27-a588-9e88262bcbc0 00:23:46.600 Thin Provisioning: Not Supported 00:23:46.600 Per-NS Atomic Units: Yes 00:23:46.600 Atomic Boundary Size (Normal): 0 00:23:46.600 Atomic Boundary Size (PFail): 0 00:23:46.600 Atomic Boundary Offset: 0 00:23:46.600 Maximum Single Source Range Length: 65535 00:23:46.600 Maximum Copy Length: 65535 00:23:46.600 Maximum Source Range Count: 1 00:23:46.600 NGUID/EUI64 Never Reused: No 00:23:46.600 Namespace Write Protected: No 00:23:46.600 Number of LBA Formats: 1 00:23:46.600 Current LBA Format: LBA Format #00 00:23:46.600 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:46.600 00:23:46.600 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:46.600 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:46.600 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.600 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.600 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.600 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:46.600 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:46.600 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 --
# nvmfcleanup 00:23:46.600 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:46.600 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:46.600 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:46.600 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:46.600 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:46.860 rmmod nvme_tcp 00:23:46.860 rmmod nvme_fabrics 00:23:46.860 rmmod nvme_keyring 00:23:46.860 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:46.860 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 1734390 ']' 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 1734390 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1734390 ']' 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1734390 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1734390 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1734390' 00:23:46.861 killing process with pid 1734390 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1734390 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1734390 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:46.861 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:23:47.121 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:47.121 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:47.121 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.121 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.121 12:28:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.033 
12:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:49.033 00:23:49.033 real 0m11.398s 00:23:49.033 user 0m8.576s 00:23:49.033 sys 0m5.829s 00:23:49.033 12:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:49.033 12:28:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:49.033 ************************************ 00:23:49.033 END TEST nvmf_identify 00:23:49.033 ************************************ 00:23:49.033 12:28:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:49.033 12:28:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:49.033 12:28:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:49.033 12:28:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.033 ************************************ 00:23:49.033 START TEST nvmf_perf 00:23:49.033 ************************************ 00:23:49.033 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:49.293 * Looking for test storage... 00:23:49.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:49.293 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:49.293 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:23:49.293 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:49.293 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:49.293 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:49.293 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:49.293 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:49.293 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:49.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.294 --rc genhtml_branch_coverage=1 00:23:49.294 --rc genhtml_function_coverage=1 00:23:49.294 --rc genhtml_legend=1 00:23:49.294 --rc geninfo_all_blocks=1 00:23:49.294 --rc geninfo_unexecuted_blocks=1 00:23:49.294 00:23:49.294 ' 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:49.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.294 --rc genhtml_branch_coverage=1 00:23:49.294 --rc genhtml_function_coverage=1 00:23:49.294 --rc genhtml_legend=1 00:23:49.294 --rc geninfo_all_blocks=1 00:23:49.294 --rc geninfo_unexecuted_blocks=1 00:23:49.294 00:23:49.294 ' 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:49.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.294 --rc genhtml_branch_coverage=1 00:23:49.294 --rc genhtml_function_coverage=1 00:23:49.294 --rc genhtml_legend=1 00:23:49.294 --rc geninfo_all_blocks=1 00:23:49.294 --rc geninfo_unexecuted_blocks=1 00:23:49.294 00:23:49.294 ' 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:49.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.294 --rc genhtml_branch_coverage=1 00:23:49.294 --rc genhtml_function_coverage=1 00:23:49.294 --rc genhtml_legend=1 00:23:49.294 --rc geninfo_all_blocks=1 00:23:49.294 --rc geninfo_unexecuted_blocks=1 00:23:49.294 00:23:49.294 ' 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:49.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:49.294 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.295 12:28:23 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.295 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.295 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:49.295 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:49.295 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:49.295 12:28:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:57.438 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:57.438 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.438 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:57.439 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:57.439 12:28:30 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:57.439 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.439 12:28:30 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:57.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:23:57.439 00:23:57.439 --- 10.0.0.2 ping statistics --- 00:23:57.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.439 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:57.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:23:57.439 00:23:57.439 --- 10.0.0.1 ping statistics --- 00:23:57.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.439 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=1738750 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 1738750 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1738750 ']' 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:57.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:57.439 12:28:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:57.439 [2024-11-04 12:28:31.050485] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:23:57.439 [2024-11-04 12:28:31.050581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.439 [2024-11-04 12:28:31.123824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.439 [2024-11-04 12:28:31.167125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.439 [2024-11-04 12:28:31.167162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.439 [2024-11-04 12:28:31.167171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.439 [2024-11-04 12:28:31.167178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.439 [2024-11-04 12:28:31.167184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.439 [2024-11-04 12:28:31.169036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.439 [2024-11-04 12:28:31.169152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.439 [2024-11-04 12:28:31.169302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.439 [2024-11-04 12:28:31.169303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.439 12:28:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:57.439 12:28:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:23:57.439 12:28:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:57.439 12:28:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:57.439 12:28:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:57.439 12:28:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.439 12:28:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:57.439 12:28:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:58.008 12:28:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:58.008 12:28:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:58.008 12:28:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:23:58.008 12:28:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:58.269 12:28:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
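
The xtrace above is perf.sh discovering its backing devices over JSON-RPC: it asks the already-running target which NVMe bdev gen_nvme.sh attached, extracts that controller's PCI address with jq, and creates a 64 MiB malloc bdev next to it. A condensed sketch of the step, where rpc= is shorthand for the long workspace path to scripts/rpc.py seen in the log:

    rpc=scripts/rpc.py
    # PCI address of the locally attached NVMe controller (0000:65:00.0 here)
    local_nvme_trid=$($rpc framework_get_config bdev \
        | jq -r '.[].params | select(.name=="Nvme0").traddr')
    # 64 MiB RAM-backed bdev with 512-byte blocks -> "Malloc0"
    bdevs=$($rpc bdev_malloc_create 64 512)
    # the NVMe bdev joins the list only if a local controller was found
    [ -n "$local_nvme_trid" ] && bdevs="$bdevs Nvme0n1"

Both bdevs, Malloc0 and Nvme0n1, become namespaces of the test subsystem in the next step.
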
00:23:58.269 12:28:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:23:58.269 12:28:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:58.269 12:28:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:58.269 12:28:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:58.530 [2024-11-04 12:28:32.933728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.530 12:28:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:58.791 12:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:58.791 12:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:58.791 12:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:58.791 12:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:59.051 12:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:59.312 [2024-11-04 12:28:33.644406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.312 12:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:59.312 12:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:23:59.312 12:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:59.312 12:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:59.312 12:28:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:00.836 Initializing NVMe Controllers 00:24:00.836 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:00.836 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:00.836 Initialization complete. Launching workers. 
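
At this point the target side is fully wired up: perf.sh (lines 42 through 49 in the xtrace) created the TCP transport, a subsystem carrying both bdevs as namespaces, and data plus discovery listeners, all through rpc.py. Condensed to its essentials, with rpc= again standing in for the full scripts/rpc.py path and every argument taken from the log:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf run launching above (-q 32 -o 4096 -w randrw -M 50 -t 1 against trtype:PCIe traddr:0000:65:00.0) is the local baseline: queue depth 32, 4 KiB random I/O at a 50/50 read/write mix for one second, straight at the PCIe controller before the same workload is pointed at the 10.0.0.2:4420 listener.
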
00:24:00.836 ======================================================== 00:24:00.836 Latency(us) 00:24:00.836 Device Information : IOPS MiB/s Average min max 00:24:00.836 PCIE (0000:65:00.0) NSID 1 from core 0: 79354.79 309.98 403.43 13.59 5983.26 00:24:00.836 ======================================================== 00:24:00.836 Total : 79354.79 309.98 403.43 13.59 5983.26 00:24:00.836 00:24:00.836 12:28:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:02.230 Initializing NVMe Controllers 00:24:02.230 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:02.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:02.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:02.230 Initialization complete. Launching workers. 00:24:02.230 ======================================================== 00:24:02.230 Latency(us) 00:24:02.230 Device Information : IOPS MiB/s Average min max 00:24:02.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 62.90 0.25 16532.14 238.07 45604.71 00:24:02.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 59.90 0.23 16826.42 7954.54 55869.49 00:24:02.230 ======================================================== 00:24:02.230 Total : 122.80 0.48 16675.69 238.07 55869.49 00:24:02.230 00:24:02.230 12:28:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:03.613 Initializing NVMe Controllers 00:24:03.613 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:03.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:03.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:03.613 Initialization complete. Launching workers. 00:24:03.613 ======================================================== 00:24:03.613 Latency(us) 00:24:03.613 Device Information : IOPS MiB/s Average min max 00:24:03.613 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10447.98 40.81 3065.22 495.35 10066.74 00:24:03.613 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3733.99 14.59 8626.56 6869.83 23516.12 00:24:03.613 ======================================================== 00:24:03.613 Total : 14181.97 55.40 4529.48 495.35 23516.12 00:24:03.613 00:24:03.613 12:28:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:03.613 12:28:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:03.613 12:28:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:06.153 Initializing NVMe Controllers 00:24:06.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:06.153 Controller IO queue size 128, less than required. 00:24:06.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
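
The "Controller IO queue size 128, less than required" message above (it repeats just below for the second namespace) is informational rather than a failure. An NVMe queue of size N can hold at most N-1 outstanding commands, so a -q 128 workload can never have all 128 requests in flight on a 128-entry fabrics queue; the overflow simply waits inside the host driver, which is what the message suggests avoiding with a lower queue depth or smaller I/Os. The 256 KiB (-o 262144) latency numbers that follow include that driver-side queuing.
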
00:24:06.153 Controller IO queue size 128, less than required. 00:24:06.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:06.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:06.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:06.153 Initialization complete. Launching workers. 00:24:06.153 ======================================================== 00:24:06.153 Latency(us) 00:24:06.153 Device Information : IOPS MiB/s Average min max 00:24:06.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1644.18 411.04 78988.33 49415.09 118992.02 00:24:06.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 609.45 152.36 219673.22 71621.68 308994.97 00:24:06.153 ======================================================== 00:24:06.153 Total : 2253.63 563.41 117034.01 49415.09 308994.97 00:24:06.153 00:24:06.153 12:28:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:06.153 No valid NVMe controllers or AIO or URING devices found 00:24:06.153 Initializing NVMe Controllers 00:24:06.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:06.153 Controller IO queue size 128, less than required. 00:24:06.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:06.153 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:06.153 Controller IO queue size 128, less than required. 00:24:06.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:06.153 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:06.153 WARNING: Some requested NVMe devices were skipped 00:24:06.153 12:28:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:08.695 Initializing NVMe Controllers 00:24:08.695 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:08.695 Controller IO queue size 128, less than required. 00:24:08.695 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:08.695 Controller IO queue size 128, less than required. 00:24:08.695 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:08.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:08.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:08.695 Initialization complete. Launching workers. 
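
This last perf pass adds --transport-stat, so when the 2-second run finishes each qpair dumps its TCP transport counters, shown in the block below. Reading the first one: polls is how often the transport poll loop ran, idle_polls how many of those iterations found nothing to do, sock_completions the socket-level events handled, nvme_completions and submitted_requests the NVMe commands those events completed and carried in, and queued_requests how many commands had to wait for a free transport request (just 1 here). Worked out for the NSID 1 qpair:

    busy_polls   = polls - idle_polls = 20074 - 11882 = 8192
    busy fraction = 8192 / 20074 = ~0.41

So roughly 41% of poll iterations on that qpair did useful work over the run.
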
00:24:08.695 00:24:08.695 ==================== 00:24:08.695 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:08.695 TCP transport: 00:24:08.695 polls: 20074 00:24:08.695 idle_polls: 11882 00:24:08.695 sock_completions: 8192 00:24:08.695 nvme_completions: 8213 00:24:08.695 submitted_requests: 12398 00:24:08.695 queued_requests: 1 00:24:08.695 00:24:08.695 ==================== 00:24:08.695 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:08.695 TCP transport: 00:24:08.695 polls: 17443 00:24:08.695 idle_polls: 10049 00:24:08.695 sock_completions: 7394 00:24:08.695 nvme_completions: 5957 00:24:08.695 submitted_requests: 9000 00:24:08.695 queued_requests: 1 00:24:08.695 ======================================================== 00:24:08.695 Latency(us) 00:24:08.695 Device Information : IOPS MiB/s Average min max 00:24:08.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2050.54 512.64 63767.01 31053.24 107177.15 00:24:08.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1487.22 371.80 86529.06 38260.12 146728.88 00:24:08.695 ======================================================== 00:24:08.695 Total : 3537.76 884.44 73335.81 31053.24 146728.88 00:24:08.695 00:24:08.695 12:28:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:08.695 12:28:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:08.695 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:08.695 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:08.695 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:08.695 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:08.695 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:08.695 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:08.695 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:08.695 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:08.695 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:08.695 rmmod nvme_tcp 00:24:08.695 rmmod nvme_fabrics 00:24:08.695 rmmod nvme_keyring 00:24:08.695 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:08.695 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:08.695 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:08.695 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 1738750 ']' 00:24:08.695 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 1738750 00:24:08.695 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1738750 ']' 00:24:08.695 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1738750 00:24:08.696 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:24:08.696 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:08.696 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1738750 00:24:08.696 12:28:43 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:08.696 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:08.696 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1738750' 00:24:08.696 killing process with pid 1738750 00:24:08.696 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1738750 00:24:08.696 12:28:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1738750 00:24:11.237 12:28:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:11.237 12:28:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:11.237 12:28:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:11.237 12:28:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:11.237 12:28:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:24:11.237 12:28:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:11.237 12:28:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:24:11.237 12:28:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:11.237 12:28:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:11.237 12:28:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.237 12:28:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.237 12:28:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:13.148 00:24:13.148 real 0m23.726s 00:24:13.148 user 0m57.682s 00:24:13.148 sys 0m8.185s 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:13.148 ************************************ 00:24:13.148 END TEST nvmf_perf 00:24:13.148 ************************************ 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.148 ************************************ 00:24:13.148 START TEST nvmf_fio_host 00:24:13.148 ************************************ 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:13.148 * Looking for test storage... 
00:24:13.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:13.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.148 --rc genhtml_branch_coverage=1 00:24:13.148 --rc genhtml_function_coverage=1 00:24:13.148 --rc genhtml_legend=1 00:24:13.148 --rc geninfo_all_blocks=1 00:24:13.148 --rc geninfo_unexecuted_blocks=1 00:24:13.148 00:24:13.148 ' 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:13.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.148 --rc genhtml_branch_coverage=1 00:24:13.148 --rc genhtml_function_coverage=1 00:24:13.148 --rc genhtml_legend=1 00:24:13.148 --rc geninfo_all_blocks=1 00:24:13.148 --rc geninfo_unexecuted_blocks=1 00:24:13.148 00:24:13.148 ' 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:13.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.148 --rc genhtml_branch_coverage=1 00:24:13.148 --rc genhtml_function_coverage=1 00:24:13.148 --rc genhtml_legend=1 00:24:13.148 --rc geninfo_all_blocks=1 00:24:13.148 --rc geninfo_unexecuted_blocks=1 00:24:13.148 00:24:13.148 ' 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:13.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.148 --rc genhtml_branch_coverage=1 00:24:13.148 --rc genhtml_function_coverage=1 00:24:13.148 --rc genhtml_legend=1 00:24:13.148 --rc geninfo_all_blocks=1 00:24:13.148 --rc geninfo_unexecuted_blocks=1 00:24:13.148 00:24:13.148 ' 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.148 12:28:47 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.148 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:13.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:13.149 
12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:13.149 12:28:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:21.286 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:21.286 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:21.286 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:21.287 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:21.287 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.287 12:28:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:21.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:24:21.287 00:24:21.287 --- 10.0.0.2 ping statistics --- 00:24:21.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.287 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:21.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:24:21.287 00:24:21.287 --- 10.0.0.1 ping statistics --- 00:24:21.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.287 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1745811 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1745811 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1745811 ']' 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:21.287 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.287 [2024-11-04 12:28:55.141053] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:24:21.287 [2024-11-04 12:28:55.141124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.287 [2024-11-04 12:28:55.213405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:21.287 [2024-11-04 12:28:55.256286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.287 [2024-11-04 12:28:55.256326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.287 [2024-11-04 12:28:55.256334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.287 [2024-11-04 12:28:55.256341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.287 [2024-11-04 12:28:55.256347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:21.287 [2024-11-04 12:28:55.258201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.287 [2024-11-04 12:28:55.258323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.287 [2024-11-04 12:28:55.258484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.287 [2024-11-04 12:28:55.258485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:21.548 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:21.548 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:21.548 12:28:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:21.548 [2024-11-04 12:28:56.101950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.809 12:28:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:21.809 12:28:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:21.809 12:28:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.809 12:28:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:21.809 Malloc1 00:24:22.069 12:28:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:22.069 12:28:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:22.329 12:28:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:22.329 [2024-11-04 12:28:56.896469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.589 12:28:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:22.589 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:22.589 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:22.589 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:22.589 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:22.589 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:22.589 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:22.589 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:22.589 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:22.589 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:22.589 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:22.589 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:22.589 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:22.589 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:22.589 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:22.589 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:22.589 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:22.590 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:22.590 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:22.590 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:22.872 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:22.872 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:22.872 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:22.872 12:28:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:23.131 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:23.131 fio-3.35 00:24:23.131 Starting 1 thread 00:24:25.674 00:24:25.674 test: (groupid=0, jobs=1): 
err= 0: pid=1746364: Mon Nov 4 12:28:59 2024 00:24:25.674 read: IOPS=13.9k, BW=54.1MiB/s (56.7MB/s)(109MiB/2005msec) 00:24:25.674 slat (usec): min=2, max=306, avg= 2.19, stdev= 2.60 00:24:25.674 clat (usec): min=3213, max=9766, avg=5091.42, stdev=401.29 00:24:25.674 lat (usec): min=3216, max=9772, avg=5093.61, stdev=401.59 00:24:25.674 clat percentiles (usec): 00:24:25.674 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:24:25.674 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:24:25.674 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604], 00:24:25.674 | 99.00th=[ 6063], 99.50th=[ 6456], 99.90th=[ 8979], 99.95th=[ 9372], 00:24:25.674 | 99.99th=[ 9634] 00:24:25.674 bw ( KiB/s): min=54312, max=55960, per=100.00%, avg=55432.00, stdev=758.69, samples=4 00:24:25.674 iops : min=13578, max=13990, avg=13858.00, stdev=189.67, samples=4 00:24:25.674 write: IOPS=13.9k, BW=54.2MiB/s (56.8MB/s)(109MiB/2005msec); 0 zone resets 00:24:25.674 slat (usec): min=2, max=277, avg= 2.26, stdev= 1.83 00:24:25.674 clat (usec): min=2408, max=8640, avg=4114.62, stdev=355.67 00:24:25.674 lat (usec): min=2411, max=8642, avg=4116.88, stdev=355.99 00:24:25.674 clat percentiles (usec): 00:24:25.674 | 1.00th=[ 3392], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3884], 00:24:25.674 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:24:25.674 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:24:25.674 | 99.00th=[ 4883], 99.50th=[ 5735], 99.90th=[ 7701], 99.95th=[ 8094], 00:24:25.674 | 99.99th=[ 8586] 00:24:25.674 bw ( KiB/s): min=54688, max=55912, per=100.00%, avg=55464.00, stdev=534.47, samples=4 00:24:25.674 iops : min=13672, max=13978, avg=13866.00, stdev=133.62, samples=4 00:24:25.674 lat (msec) : 4=17.47%, 10=82.53% 00:24:25.674 cpu : usr=74.15%, sys=24.45%, ctx=29, majf=0, minf=8 00:24:25.674 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:25.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:25.674 issued rwts: total=27778,27797,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.674 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:25.674 00:24:25.674 Run status group 0 (all jobs): 00:24:25.674 READ: bw=54.1MiB/s (56.7MB/s), 54.1MiB/s-54.1MiB/s (56.7MB/s-56.7MB/s), io=109MiB (114MB), run=2005-2005msec 00:24:25.674 WRITE: bw=54.2MiB/s (56.8MB/s), 54.2MiB/s-54.2MiB/s (56.8MB/s-56.8MB/s), io=109MiB (114MB), run=2005-2005msec 00:24:25.674 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:25.675 
12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:25.675 12:28:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:25.675 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:25.675 fio-3.35 00:24:25.675 Starting 1 thread 00:24:28.220 00:24:28.220 test: (groupid=0, jobs=1): err= 0: pid=1747172: Mon Nov 4 12:29:02 2024 00:24:28.220 read: IOPS=9073, BW=142MiB/s (149MB/s)(284MiB/2006msec) 00:24:28.220 slat (usec): min=3, max=114, avg= 3.68, stdev= 1.72 00:24:28.220 clat (usec): min=2143, max=52326, avg=8617.09, stdev=3811.59 00:24:28.220 lat (usec): min=2147, max=52330, avg=8620.77, stdev=3811.67 00:24:28.220 clat percentiles (usec): 00:24:28.220 | 1.00th=[ 4621], 5.00th=[ 5407], 10.00th=[ 5997], 20.00th=[ 6652], 00:24:28.220 | 30.00th=[ 7177], 40.00th=[ 7701], 50.00th=[ 8291], 60.00th=[ 8848], 00:24:28.220 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[10814], 95.00th=[11469], 00:24:28.220 | 99.00th=[13566], 99.50th=[47449], 99.90th=[51119], 99.95th=[51643], 00:24:28.220 | 99.99th=[52167] 00:24:28.220 bw ( KiB/s): min=66976, max=83104, per=49.41%, avg=71728.00, stdev=7627.26, samples=4 00:24:28.220 iops : min= 4186, max= 5194, avg=4483.00, stdev=476.70, samples=4 00:24:28.220 write: IOPS=5359, BW=83.7MiB/s (87.8MB/s)(146MiB/1749msec); 0 zone resets 00:24:28.220 slat (usec): min=39, max=457, 
avg=41.21, stdev= 9.48 00:24:28.220 clat (usec): min=2167, max=17630, avg=9545.30, stdev=1549.96 00:24:28.220 lat (usec): min=2208, max=17768, avg=9586.51, stdev=1552.62 00:24:28.220 clat percentiles (usec): 00:24:28.220 | 1.00th=[ 6849], 5.00th=[ 7504], 10.00th=[ 7832], 20.00th=[ 8225], 00:24:28.220 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:24:28.220 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11469], 95.00th=[12125], 00:24:28.220 | 99.00th=[14484], 99.50th=[15926], 99.90th=[16909], 99.95th=[17433], 00:24:28.220 | 99.99th=[17695] 00:24:28.220 bw ( KiB/s): min=69408, max=86400, per=86.76%, avg=74400.00, stdev=8054.13, samples=4 00:24:28.220 iops : min= 4338, max= 5400, avg=4650.00, stdev=503.38, samples=4 00:24:28.220 lat (msec) : 4=0.23%, 10=73.88%, 20=25.43%, 50=0.31%, 100=0.15% 00:24:28.220 cpu : usr=84.24%, sys=14.16%, ctx=20, majf=0, minf=24 00:24:28.221 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:28.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:28.221 issued rwts: total=18201,9374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.221 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:28.221 00:24:28.221 Run status group 0 (all jobs): 00:24:28.221 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=284MiB (298MB), run=2006-2006msec 00:24:28.221 WRITE: bw=83.7MiB/s (87.8MB/s), 83.7MiB/s-83.7MiB/s (87.8MB/s-87.8MB/s), io=146MiB (154MB), run=1749-1749msec 00:24:28.221 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:28.481 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:28.481 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:28.481 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:28.481 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:28.481 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:28.481 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:28.481 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.481 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:28.481 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.481 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.481 rmmod nvme_tcp 00:24:28.481 rmmod nvme_fabrics 00:24:28.481 rmmod nvme_keyring 00:24:28.481 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.481 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:28.481 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:28.481 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 1745811 ']' 00:24:28.481 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 1745811 00:24:28.481 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1745811 ']' 00:24:28.481 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # 
kill -0 1745811 00:24:28.482 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:28.482 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.482 12:29:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1745811 00:24:28.482 12:29:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:28.482 12:29:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:28.482 12:29:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1745811' 00:24:28.482 killing process with pid 1745811 00:24:28.482 12:29:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1745811 00:24:28.482 12:29:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1745811 00:24:28.743 12:29:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:28.743 12:29:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:28.743 12:29:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:28.743 12:29:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:28.743 12:29:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:24:28.743 12:29:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:28.743 12:29:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:24:28.743 12:29:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.743 12:29:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:28.743 12:29:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.743 12:29:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.743 12:29:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.290 00:24:31.290 real 0m17.872s 00:24:31.290 user 1m8.652s 00:24:31.290 sys 0m7.445s 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.290 ************************************ 00:24:31.290 END TEST nvmf_fio_host 00:24:31.290 ************************************ 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.290 ************************************ 00:24:31.290 START TEST nvmf_failover 00:24:31.290 ************************************ 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:31.290 * Looking for test storage... 00:24:31.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:31.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.290 --rc genhtml_branch_coverage=1 00:24:31.290 --rc genhtml_function_coverage=1 00:24:31.290 --rc genhtml_legend=1 00:24:31.290 --rc geninfo_all_blocks=1 00:24:31.290 --rc geninfo_unexecuted_blocks=1 00:24:31.290 00:24:31.290 ' 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:31.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.290 --rc genhtml_branch_coverage=1 00:24:31.290 --rc genhtml_function_coverage=1 00:24:31.290 --rc genhtml_legend=1 00:24:31.290 --rc geninfo_all_blocks=1 00:24:31.290 --rc geninfo_unexecuted_blocks=1 00:24:31.290 00:24:31.290 ' 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:31.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.290 --rc genhtml_branch_coverage=1 00:24:31.290 --rc genhtml_function_coverage=1 00:24:31.290 --rc genhtml_legend=1 00:24:31.290 --rc geninfo_all_blocks=1 00:24:31.290 --rc geninfo_unexecuted_blocks=1 00:24:31.290 00:24:31.290 ' 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:31.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.290 --rc genhtml_branch_coverage=1 00:24:31.290 --rc genhtml_function_coverage=1 00:24:31.290 --rc genhtml_legend=1 00:24:31.290 --rc geninfo_all_blocks=1 00:24:31.290 --rc geninfo_unexecuted_blocks=1 00:24:31.290 00:24:31.290 ' 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.290 12:29:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:37.879 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:37.879 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:37.879 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:37.879 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:37.879 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.880 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.880 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.880 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:37.880 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.880 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.880 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:37.880 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:37.880 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.880 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.880 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:37.880 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:37.880 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.880 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:24:38.141 00:24:38.141 --- 10.0.0.2 ping statistics --- 00:24:38.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.141 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:24:38.141 00:24:38.141 --- 10.0.0.1 ping statistics --- 00:24:38.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.141 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=1751817 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 1751817 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1751817 ']' 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:38.141 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:38.142 12:29:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:38.403 [2024-11-04 12:29:12.760028] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:24:38.403 [2024-11-04 12:29:12.760097] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.403 [2024-11-04 12:29:12.847443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:38.403 [2024-11-04 12:29:12.898384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
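The waitforlisten step above is essentially a poll of the app's RPC socket until the target answers; a minimal equivalent loop (socket path from this run; spdk_get_version is a standard rpc.py method):

    SOCK=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        # succeeds only once nvmf_tgt is up and listening on $SOCK
        scripts/rpc.py -s "$SOCK" spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done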
00:24:38.403 [2024-11-04 12:29:12.898434] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.403 [2024-11-04 12:29:12.898443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.403 [2024-11-04 12:29:12.898450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.403 [2024-11-04 12:29:12.898456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.403 [2024-11-04 12:29:12.900229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.403 [2024-11-04 12:29:12.900372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.403 [2024-11-04 12:29:12.900373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:39.345 12:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:39.345 12:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:39.345 12:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:39.345 12:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:39.345 12:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:39.345 12:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.345 12:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:39.345 [2024-11-04 12:29:13.767410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.345 12:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:39.606 Malloc0 00:24:39.607 12:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:39.867 12:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:39.867 12:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.128 [2024-11-04 12:29:14.532799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.128 12:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:40.389 [2024-11-04 12:29:14.717293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:40.389 12:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:40.389 [2024-11-04 12:29:14.893839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:24:40.389 12:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:40.389 12:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1752199 00:24:40.389 12:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:40.389 12:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1752199 /var/tmp/bdevperf.sock 00:24:40.389 12:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1752199 ']' 00:24:40.389 12:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:40.389 12:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:40.389 12:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:40.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:40.389 12:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:40.389 12:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:40.651 12:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:40.651 12:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:40.651 12:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:40.911 NVMe0n1 00:24:40.911 12:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:41.171 00:24:41.171 12:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1752263 00:24:41.171 12:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:41.171 12:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:42.557 12:29:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:42.557 [2024-11-04 12:29:16.844434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1131010 is same with the state(6) to be set 00:24:42.557 [2024-11-04 12:29:16.844476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1131010 is same with the state(6) to be set 00:24:42.557 [2024-11-04 12:29:16.844482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1131010 is same with the state(6) to be set 00:24:42.557 
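The failover wiring above amounts to attaching the same subsystem twice over different portals under one controller name; a condensed sketch of what host/failover.sh drives through the bdevperf RPC socket (socket path, NQN, and ports as in this run):

    RPC='scripts/rpc.py -s /var/tmp/bdevperf.sock'
    NQN=nqn.2016-06.io.spdk:cnode1
    # first attach creates bdev NVMe0n1; -x failover keeps extra paths as standby
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -x failover
    # second attach to the same NQN registers port 4421 as an alternate path
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN -x failover
    # kick off the timed workload against NVMe0n1 in the background
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The test then removes listeners out from under the running workload (4420 first, so I/O fails over to 4421; later 4422 is attached and 4421 removed, 4420 re-added and 4422 removed), which is what produces the bursts of recv-state errors below.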
[2024-11-04 12:29:16.844487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1131010 is same with the state(6) to be set 00:24:42.557 [2024-11-04 12:29:16.844492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1131010 is same with the state(6) to be set 00:24:42.557 [2024-11-04 12:29:16.844497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1131010 is same with the state(6) to be set 00:24:42.557 [2024-11-04 12:29:16.844501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1131010 is same with the state(6) to be set 00:24:42.557 [2024-11-04 12:29:16.844506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1131010 is same with the state(6) to be set 00:24:42.557 12:29:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:45.855 12:29:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:45.855 00:24:45.855 12:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:45.855 12:29:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:49.150 12:29:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:49.150 [2024-11-04 12:29:23.545736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.150 12:29:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:50.092 12:29:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:50.353 [2024-11-04 12:29:24.739423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1132d10 is same with the state(6) to be set 00:24:50.353 [2024-11-04 12:29:24.739457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1132d10 is same with the state(6) to be set 00:24:50.353 [2024-11-04 12:29:24.739463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1132d10 is same with the state(6) to be set 00:24:50.353 [2024-11-04 12:29:24.739468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1132d10 is same with the state(6) to be set 00:24:50.353 [2024-11-04 12:29:24.739472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1132d10 is same with the state(6) to be set 00:24:50.353 [2024-11-04 12:29:24.739477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1132d10 is same with the state(6) to be set 00:24:50.353 [2024-11-04 12:29:24.739482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1132d10 is same with the state(6) to be set 00:24:50.353 [2024-11-04 12:29:24.739487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1132d10 is same with the state(6) to be set 00:24:50.353 [2024-11-04 
12:29:24.739491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1132d10 is same with the state(6) to be set 00:24:50.353 [... the same recv-state message for tqpair=0x1132d10 repeats, timestamps 12:29:24.739496 through 12:29:24.739587 ...] 00:24:50.353 [2024-11-04 12:29:24.739591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1132d10 is same
with the state(6) to be set 00:24:50.353 [2024-11-04 12:29:24.739596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1132d10 is same with the state(6) to be set 00:24:50.353 [2024-11-04 12:29:24.739600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1132d10 is same with the state(6) to be set 00:24:50.353 [2024-11-04 12:29:24.739605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1132d10 is same with the state(6) to be set 00:24:50.353 12:29:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1752263 00:24:56.938 { 00:24:56.938 "results": [ 00:24:56.938 { 00:24:56.938 "job": "NVMe0n1", 00:24:56.938 "core_mask": "0x1", 00:24:56.938 "workload": "verify", 00:24:56.938 "status": "finished", 00:24:56.938 "verify_range": { 00:24:56.938 "start": 0, 00:24:56.938 "length": 16384 00:24:56.938 }, 00:24:56.938 "queue_depth": 128, 00:24:56.938 "io_size": 4096, 00:24:56.938 "runtime": 15.008083, 00:24:56.938 "iops": 11094.021801451925, 00:24:56.938 "mibps": 43.33602266192158, 00:24:56.938 "io_failed": 8677, 00:24:56.938 "io_timeout": 0, 00:24:56.938 "avg_latency_us": 10938.335397607372, 00:24:56.938 "min_latency_us": 542.72, 00:24:56.938 "max_latency_us": 20316.16 00:24:56.938 } 00:24:56.938 ], 00:24:56.938 "core_count": 1 00:24:56.938 } 00:24:56.938 12:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1752199 00:24:56.938 12:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1752199 ']' 00:24:56.938 12:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1752199 00:24:56.938 12:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:56.938 12:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:56.938 12:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1752199 00:24:56.938 12:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:56.938 12:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:56.938 12:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1752199' 00:24:56.938 killing process with pid 1752199 00:24:56.938 12:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1752199 00:24:56.938 12:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1752199 00:24:56.938 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:56.938 [2024-11-04 12:29:14.963406] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:24:56.938 [2024-11-04 12:29:14.963465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752199 ] 00:24:56.938 [2024-11-04 12:29:15.023704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.939 [2024-11-04 12:29:15.059293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.939 Running I/O for 15 seconds... 
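The final JSON above reports both iops and mibps, and the latter is just iops multiplied by io_size and converted to MiB. A quick check of that arithmetic, plus a rough tally of the aborts the listener cycling produced in the per-command trace that follows (try.txt path from this run; the abort count need not exactly equal io_failed, which also covers other completion paths):

    # 11094.0218 IOPS * 4096 B / 2^20 = 43.336 MiB/s, matching "mibps"
    awk 'BEGIN { print 11094.021801451925 * 4096 / 1048576 }'
    # count commands the target completed as ABORTED - SQ DELETION
    grep -c 'ABORTED - SQ DELETION' test/nvmf/host/try.txt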
00:24:56.939 11136.00 IOPS, 43.50 MiB/s [2024-11-04T11:29:31.509Z] [2024-11-04 12:29:16.845005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.939 [2024-11-04 12:29:16.845039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.939 [2024-11-04 12:29:16.845057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.939 [2024-11-04 12:29:16.845066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.939 [2024-11-04 12:29:16.845077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.939 [2024-11-04 12:29:16.845085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.939 [2024-11-04 12:29:16.845095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.939 [2024-11-04 12:29:16.845102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.939 [2024-11-04 12:29:16.845111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.939 [2024-11-04 12:29:16.845119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.939 [2024-11-04 12:29:16.845129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.939 [2024-11-04 12:29:16.845137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.939 [2024-11-04 12:29:16.845147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.939 [2024-11-04 12:29:16.845156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.939 [2024-11-04 12:29:16.845165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.939 [2024-11-04 12:29:16.845173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.939 [2024-11-04 12:29:16.845182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.939 [2024-11-04 12:29:16.845190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.939 [2024-11-04 12:29:16.845199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.939 [2024-11-04 12:29:16.845207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:56.939 [... the same pair of records repeats for WRITE commands from lba:96392 through lba:97032 (cid varies, lba steps by 8): each "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:NN nsid:1 lba:NNNNN len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000" is answered by "nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0", timestamps 12:29:16.845216 through 12:29:16.846599 ...] 00:24:56.941 [2024-11-04 12:29:16.846608]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.941 [2024-11-04 12:29:16.846615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.941 [2024-11-04 12:29:16.846632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.941 [2024-11-04 12:29:16.846650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.941 [2024-11-04 12:29:16.846666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.941 [2024-11-04 12:29:16.846682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.941 [2024-11-04 12:29:16.846699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.941 [2024-11-04 12:29:16.846716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.941 [2024-11-04 12:29:16.846733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.941 [2024-11-04 12:29:16.846755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.941 [2024-11-04 12:29:16.846773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846783] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.941 [2024-11-04 12:29:16.846790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.941 [2024-11-04 12:29:16.846807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.941 [2024-11-04 12:29:16.846825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.941 [2024-11-04 12:29:16.846843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.941 [2024-11-04 12:29:16.846859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.941 [2024-11-04 12:29:16.846876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.941 [2024-11-04 12:29:16.846893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.941 [2024-11-04 12:29:16.846909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.941 [2024-11-04 12:29:16.846926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.941 [2024-11-04 12:29:16.846943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96224 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.941 [2024-11-04 12:29:16.846959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.941 [2024-11-04 12:29:16.846978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.846988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.941 [2024-11-04 12:29:16.846996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.847005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.941 [2024-11-04 12:29:16.847012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.847021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.941 [2024-11-04 12:29:16.847029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.847038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.941 [2024-11-04 12:29:16.847046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.847055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.941 [2024-11-04 12:29:16.847063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.941 [2024-11-04 12:29:16.847072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.941 [2024-11-04 12:29:16.847079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:16.847089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:16.847096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:16.847105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:16.847112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:16.847122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.942 [2024-11-04 12:29:16.847129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:16.847138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.942 [2024-11-04 12:29:16.847145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:16.847155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.942 [2024-11-04 12:29:16.847162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:16.847171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.942 [2024-11-04 12:29:16.847180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:16.847189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.942 [2024-11-04 12:29:16.847197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:16.847206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.942 [2024-11-04 12:29:16.847213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:16.847234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.942 [2024-11-04 12:29:16.847241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.942 [2024-11-04 12:29:16.847248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96312 len:8 PRP1 0x0 PRP2 0x0 00:24:56.942 [2024-11-04 12:29:16.847256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:16.847294] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1115ee0 was disconnected and freed. reset controller. 
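The block above is the teardown of the first path: every in-flight READ/WRITE on I/O queue qid:1 is completed with ABORTED - SQ DELETION (status 00/08) once qpair 0x1115ee0 on 10.0.0.2:4420 goes away, after which bdev_nvme frees the qpair and schedules a controller reset. A quick way to tally the storm from a saved copy of this console output is sketched below; the file name build.log is an assumption, the grep patterns are copied from the lines above.
# Hedged sketch, assuming the console output was captured to build.log:
grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | sort | uniq -c
# Aborted WRITEs vs READs on the I/O queue:
grep -c 'print_command: \*NOTICE\*: WRITE sqid:1' build.log
grep -c 'print_command: \*NOTICE\*: READ sqid:1' build.log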
00:24:56.942 [2024-11-04 12:29:16.847304] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:56.942 [2024-11-04 12:29:16.847323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.942 [2024-11-04 12:29:16.847332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:16.847340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.942 [2024-11-04 12:29:16.847347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:16.847356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.942 [2024-11-04 12:29:16.847363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:16.847372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.942 [2024-11-04 12:29:16.847379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:16.847386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.942 [2024-11-04 12:29:16.850883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.942 [2024-11-04 12:29:16.850906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f4e30 (9): Bad file descriptor 00:24:56.942 [2024-11-04 12:29:17.023092] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
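Here the failover itself happens: bdev_nvme_failover_trid moves the transport ID from 10.0.0.2:4420 to 10.0.0.2:4421, the four outstanding ASYNC EVENT REQUESTs on the admin queue (qid:0, cid 0-3) are aborted the same way, the controller is marked failed and disconnected (the "Bad file descriptor" on tqpair 0x10f4e30 is the already-closed TCP socket), and roughly 170 ms later the reset completes on the new path. The throughput samples that follow are consistent with 4 KiB I/O (len:8 is eight 512-byte sectors): 10338.50 IOPS x 4 KiB = 40.38 MiB/s, exactly the rate printed. A minimal sketch of the two-portal topology this implies, using standard rpc.py calls — the NQN and addresses come from the log, while the bdev name Nvme0, the Malloc0 namespace, and the availability of the -x failover multipath flag in this SPDK build are assumptions:
# Target side: one subsystem, one namespace, two TCP listeners.
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# Initiator side: attach both portals under one bdev so bdev_nvme can fail over between them.
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover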
00:24:56.942 10338.50 IOPS, 40.38 MiB/s [2024-11-04T11:29:31.512Z] 10658.00 IOPS, 41.63 MiB/s [2024-11-04T11:29:31.512Z] 10812.50 IOPS, 42.24 MiB/s [2024-11-04T11:29:31.512Z] [2024-11-04 12:29:20.360266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:53968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.942 [2024-11-04 12:29:20.360314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.942 [2024-11-04 12:29:20.360703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.942 [2024-11-04 12:29:20.360711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.360720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.360728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.360738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.360750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.360760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.360767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.360776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.360783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.360794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.360802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.360813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.360820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.360831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.360838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:56.943 [2024-11-04 12:29:20.360848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.360855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.360864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.360871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.360880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.360888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.360899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.360907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.360916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.360924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.360933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.360941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.360950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.360957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.360966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.360974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.360983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.360991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361018] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361183] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.943 [2024-11-04 12:29:20.361291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.943 [2024-11-04 12:29:20.361300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.944 [2024-11-04 12:29:20.361307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.944 [2024-11-04 12:29:20.361323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.944 [2024-11-04 12:29:20.361340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54568 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.944 [2024-11-04 12:29:20.361356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.944 [2024-11-04 12:29:20.361372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.944 [2024-11-04 12:29:20.361389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.944 [2024-11-04 12:29:20.361406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.944 [2024-11-04 12:29:20.361435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54600 len:8 PRP1 0x0 PRP2 0x0 00:24:56.944 [2024-11-04 12:29:20.361446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.944 [2024-11-04 12:29:20.361462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.944 [2024-11-04 12:29:20.361468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54608 len:8 PRP1 0x0 PRP2 0x0 00:24:56.944 [2024-11-04 12:29:20.361476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.944 [2024-11-04 12:29:20.361489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.944 [2024-11-04 12:29:20.361497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54616 len:8 PRP1 0x0 PRP2 0x0 00:24:56.944 [2024-11-04 12:29:20.361504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.944 [2024-11-04 12:29:20.361518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.944 [2024-11-04 12:29:20.361524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54624 len:8 PRP1 0x0 PRP2 0x0 00:24:56.944 [2024-11-04 12:29:20.361532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361539] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.944 [2024-11-04 12:29:20.361545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.944 [2024-11-04 12:29:20.361552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54632 len:8 PRP1 0x0 PRP2 0x0 00:24:56.944 [2024-11-04 12:29:20.361560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.944 [2024-11-04 12:29:20.361574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.944 [2024-11-04 12:29:20.361580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54640 len:8 PRP1 0x0 PRP2 0x0 00:24:56.944 [2024-11-04 12:29:20.361587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.944 [2024-11-04 12:29:20.361600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.944 [2024-11-04 12:29:20.361607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54648 len:8 PRP1 0x0 PRP2 0x0 00:24:56.944 [2024-11-04 12:29:20.361614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.944 [2024-11-04 12:29:20.361627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.944 [2024-11-04 12:29:20.361634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54656 len:8 PRP1 0x0 PRP2 0x0 00:24:56.944 [2024-11-04 12:29:20.361641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.944 [2024-11-04 12:29:20.361654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.944 [2024-11-04 12:29:20.361663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54664 len:8 PRP1 0x0 PRP2 0x0 00:24:56.944 [2024-11-04 12:29:20.361670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.944 [2024-11-04 12:29:20.361684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.944 [2024-11-04 12:29:20.361690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54672 len:8 PRP1 0x0 PRP2 0x0 00:24:56.944 [2024-11-04 12:29:20.361697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:24:56.944 [2024-11-04 12:29:20.361711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.944 [2024-11-04 12:29:20.361717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54680 len:8 PRP1 0x0 PRP2 0x0 00:24:56.944 [2024-11-04 12:29:20.361725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.944 [2024-11-04 12:29:20.361738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.944 [2024-11-04 12:29:20.361744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54688 len:8 PRP1 0x0 PRP2 0x0 00:24:56.944 [2024-11-04 12:29:20.361755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.944 [2024-11-04 12:29:20.361768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.944 [2024-11-04 12:29:20.361774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54696 len:8 PRP1 0x0 PRP2 0x0 00:24:56.944 [2024-11-04 12:29:20.361782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.944 [2024-11-04 12:29:20.361796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.944 [2024-11-04 12:29:20.361802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54704 len:8 PRP1 0x0 PRP2 0x0 00:24:56.944 [2024-11-04 12:29:20.361809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.944 [2024-11-04 12:29:20.361822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.944 [2024-11-04 12:29:20.361828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54712 len:8 PRP1 0x0 PRP2 0x0 00:24:56.944 [2024-11-04 12:29:20.361836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.944 [2024-11-04 12:29:20.361850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.944 [2024-11-04 12:29:20.361856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54720 len:8 PRP1 0x0 PRP2 0x0 00:24:56.944 [2024-11-04 12:29:20.361863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.944 [2024-11-04 12:29:20.361871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.944 [2024-11-04 12:29:20.361878] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:56.944 [2024-11-04 12:29:20.361884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53976 len:8 PRP1 0x0 PRP2 0x0
00:24:56.944 [2024-11-04 12:29:20.361893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same three-entry pattern repeats for every request still queued on the deleted submission queue: 579:nvme_qpair_abort_queued_reqs *ERROR*: aborting queued i/o, 558:nvme_qpair_manual_complete_request *NOTICE*: Command completed manually:, then the command/completion pair. Covered here: READ lba:53984-54024, WRITE lba:54728-54984, READ lba:54032-54088, all len:8 PRP1 0x0 PRP2 0x0, every one completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:56.946 [2024-11-04 12:29:20.374195] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1117f90 was disconnected and freed. reset controller.
00:24:56.946 [2024-11-04 12:29:20.374206] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:56.946 [2024-11-04 12:29:20.374235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:56.946 [2024-11-04 12:29:20.374244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.946 [2024-11-04 12:29:20.374254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:56.946 [2024-11-04 12:29:20.374262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.946 [2024-11-04 12:29:20.374270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:56.946 [2024-11-04 12:29:20.374278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.946 [2024-11-04 12:29:20.374286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:56.946 [2024-11-04 12:29:20.374293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:56.947 [2024-11-04 12:29:20.374300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:56.947 [2024-11-04 12:29:20.374329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f4e30 (9): Bad file descriptor
00:24:56.947 [2024-11-04 12:29:20.377831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:56.947 [2024-11-04 12:29:20.413101] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
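[editor's note: the sequence above is the meaningful part of the abort storm: bdev_nvme tears down the path at 10.0.0.2:4421, fails every request still queued on the deleted SQ with ABORTED - SQ DELETION, and fails over to 10.0.0.2:4422 before resetting the controller. For context, a two-path attachment of this kind is normally created through SPDK's RPC interface; the sketch below is illustrative only, not extracted from this run: the bdev name Nvme0 is an arbitrary choice, and the addresses, ports, and NQN are taken from the log entries above.]

# Minimal sketch: attach the same NVMe-oF subsystem over two TCP paths
# so bdev_nvme can fail over between them.
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1
# Register the secondary path; with "-x failover" it is only used
# once the primary path goes down, which is what this log exercises.
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover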
00:24:56.947 10796.60 IOPS, 42.17 MiB/s [2024-11-04T11:29:31.517Z] 10928.33 IOPS, 42.69 MiB/s [2024-11-04T11:29:31.517Z] 10961.00 IOPS, 42.82 MiB/s [2024-11-04T11:29:31.517Z] 10970.38 IOPS, 42.85 MiB/s [2024-11-04T11:29:31.517Z]
00:24:56.947 [2024-11-04 12:29:24.740780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:56.947 [2024-11-04 12:29:24.740815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the command/completion pair repeats for every in-flight request on the SQ deleted at the next failover point (varying cids): READ lba:59792-60360 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE lba:60368-60736 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), all len:8, every one completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:56.950 [2024-11-04 12:29:24.742893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:56.950 [2024-11-04 12:29:24.742900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.950 [2024-11-04 12:29:24.742909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.950 [2024-11-04 12:29:24.742916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.950 [2024-11-04 12:29:24.742926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.950 [2024-11-04 12:29:24.742933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.950 [2024-11-04 12:29:24.742943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.950 [2024-11-04 12:29:24.742950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.950 [2024-11-04 12:29:24.742959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:56.950 [2024-11-04 12:29:24.742966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.950 [2024-11-04 12:29:24.742987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.950 [2024-11-04 12:29:24.742994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60784 len:8 PRP1 0x0 PRP2 0x0 00:24:56.950 [2024-11-04 12:29:24.743002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.950 [2024-11-04 12:29:24.743013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.950 [2024-11-04 12:29:24.743018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.950 [2024-11-04 12:29:24.743024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60792 len:8 PRP1 0x0 PRP2 0x0 00:24:56.950 [2024-11-04 12:29:24.743031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.950 [2024-11-04 12:29:24.743040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:56.950 [2024-11-04 12:29:24.743047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:56.950 [2024-11-04 12:29:24.743053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60800 len:8 PRP1 0x0 PRP2 0x0 00:24:56.950 [2024-11-04 12:29:24.743060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.950 [2024-11-04 12:29:24.743098] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1117c50 was disconnected and freed. reset controller. 
00:24:56.950 [2024-11-04 12:29:24.743108] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:56.950 [2024-11-04 12:29:24.743128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.950 [2024-11-04 12:29:24.743136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.950 [2024-11-04 12:29:24.743145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.950 [2024-11-04 12:29:24.743153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.950 [2024-11-04 12:29:24.743162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.950 [2024-11-04 12:29:24.743169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.950 [2024-11-04 12:29:24.743178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.950 [2024-11-04 12:29:24.743185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.950 [2024-11-04 12:29:24.743192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.950 [2024-11-04 12:29:24.743216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f4e30 (9): Bad file descriptor 00:24:56.950 [2024-11-04 12:29:24.746739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.950 [2024-11-04 12:29:24.787029] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
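The hop logged above (failover from 10.0.0.2:4422 back to 10.0.0.2:4420, followed by a successful reset) is driven entirely by the alternate paths registered with bdev_nvme earlier in the test. A minimal sketch of that registration pattern, reusing the rpc.py invocations visible in this trace (the loop form is illustrative; the script itself issues the calls one by one):

    # Sketch: one subsystem, three TCP listeners, one controller attached once
    # per path with -x failover so bdev_nvme can rotate on path loss.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    for port in 4420 4421 4422; do
        $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
    done
    for port in 4420 4421 4422; do
        # The first call creates bdev NVMe0n1; the next two only add alternate paths.
        $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN" -x failover
    done
    # Detaching the active path (bdev_nvme_detach_controller ... -s 4420) then
    # triggers the "Start failover from ... to ..." transitions logged above.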
00:24:56.950 10949.56 IOPS, 42.77 MiB/s
[2024-11-04T11:29:31.520Z] 10978.90 IOPS, 42.89 MiB/s
[2024-11-04T11:29:31.520Z] 10980.09 IOPS, 42.89 MiB/s
[2024-11-04T11:29:31.520Z] 10999.92 IOPS, 42.97 MiB/s
[2024-11-04T11:29:31.520Z] 11066.54 IOPS, 43.23 MiB/s
[2024-11-04T11:29:31.520Z] 11073.36 IOPS, 43.26 MiB/s
00:24:56.950 Latency(us)
00:24:56.950 [2024-11-04T11:29:31.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:56.950 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:56.950 Verification LBA range: start 0x0 length 0x4000
00:24:56.950 NVMe0n1 : 15.01 11094.02 43.34 578.16 0.00 10938.34 542.72 20316.16
00:24:56.950 [2024-11-04T11:29:31.520Z] ===================================================================================================================
00:24:56.950 [2024-11-04T11:29:31.520Z] Total : 11094.02 43.34 578.16 0.00 10938.34 542.72 20316.16
00:24:56.950 Received shutdown signal, test time was about 15.000000 seconds
00:24:56.950
00:24:56.950 Latency(us)
00:24:56.950 [2024-11-04T11:29:31.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:56.950 [2024-11-04T11:29:31.520Z] ===================================================================================================================
00:24:56.950 [2024-11-04T11:29:31.520Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:56.950 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:56.950 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:56.950 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1755219
00:24:56.950 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1755219 /var/tmp/bdevperf.sock
00:24:56.950 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:56.950 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1755219 ']'
00:24:56.950 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:56.950 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:56.950 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
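The grep -c / count=3 exchange above is the pass criterion for the first half of the test: each of the three forced path hops must log exactly one successful controller reset. As a standalone sketch (try.txt is the capture file this run writes; the threshold of 3 matches the three hops):

    # Sketch: count one "Resetting controller successful" per completed failover.
    count=$(grep -c 'Resetting controller successful' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful resets, saw $count" >&2
        exit 1
    fi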
00:24:56.950 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:56.950 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:56.950 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:56.950 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:56.950 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:56.950 [2024-11-04 12:29:31.424865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:56.950 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:57.211 [2024-11-04 12:29:31.609281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:57.211 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:57.472 NVMe0n1 00:24:57.472 12:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:57.733 00:24:57.733 12:29:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:57.994 00:24:57.994 12:29:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:57.994 12:29:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:58.255 12:29:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:58.517 12:29:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:01.818 12:29:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:01.818 12:29:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:01.818 12:29:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:01.818 12:29:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1756231 00:25:01.818 12:29:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1756231 00:25:02.874 { 00:25:02.874 "results": [ 00:25:02.874 { 00:25:02.874 "job": "NVMe0n1", 00:25:02.874 "core_mask": "0x1", 
00:25:02.874 "workload": "verify", 00:25:02.874 "status": "finished", 00:25:02.874 "verify_range": { 00:25:02.874 "start": 0, 00:25:02.874 "length": 16384 00:25:02.874 }, 00:25:02.874 "queue_depth": 128, 00:25:02.874 "io_size": 4096, 00:25:02.874 "runtime": 1.012252, 00:25:02.874 "iops": 11048.632158790499, 00:25:02.874 "mibps": 43.158719370275385, 00:25:02.874 "io_failed": 0, 00:25:02.874 "io_timeout": 0, 00:25:02.874 "avg_latency_us": 11528.327629947544, 00:25:02.874 "min_latency_us": 2580.48, 00:25:02.874 "max_latency_us": 12124.16 00:25:02.874 } 00:25:02.874 ], 00:25:02.874 "core_count": 1 00:25:02.874 } 00:25:02.874 12:29:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:02.874 [2024-11-04 12:29:31.086844] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:25:02.874 [2024-11-04 12:29:31.086902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755219 ] 00:25:02.874 [2024-11-04 12:29:31.148221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.874 [2024-11-04 12:29:31.183486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.874 [2024-11-04 12:29:32.809505] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:02.874 [2024-11-04 12:29:32.809549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.874 [2024-11-04 12:29:32.809561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.874 [2024-11-04 12:29:32.809572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.874 [2024-11-04 12:29:32.809579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.874 [2024-11-04 12:29:32.809588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.874 [2024-11-04 12:29:32.809595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.874 [2024-11-04 12:29:32.809603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.874 [2024-11-04 12:29:32.809611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.874 [2024-11-04 12:29:32.809619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.874 [2024-11-04 12:29:32.809647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.874 [2024-11-04 12:29:32.809662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1beee30 (9): Bad file descriptor 00:25:02.874 [2024-11-04 12:29:32.942936] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:02.874 Running I/O for 1 seconds... 
00:25:02.874 10998.00 IOPS, 42.96 MiB/s 00:25:02.874 Latency(us) 00:25:02.874 [2024-11-04T11:29:37.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.874 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:02.874 Verification LBA range: start 0x0 length 0x4000 00:25:02.874 NVMe0n1 : 1.01 11048.63 43.16 0.00 0.00 11528.33 2580.48 12124.16 00:25:02.874 [2024-11-04T11:29:37.444Z] =================================================================================================================== 00:25:02.874 [2024-11-04T11:29:37.444Z] Total : 11048.63 43.16 0.00 0.00 11528.33 2580.48 12124.16 00:25:02.874 12:29:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:02.874 12:29:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:02.874 12:29:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:03.135 12:29:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:03.135 12:29:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:03.135 12:29:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:03.394 12:29:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:06.694 12:29:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:06.694 12:29:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:06.694 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1755219 00:25:06.694 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1755219 ']' 00:25:06.694 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1755219 00:25:06.694 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:06.694 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:06.694 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1755219 00:25:06.694 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:06.694 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:06.694 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1755219' 00:25:06.694 killing process with pid 1755219 00:25:06.694 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1755219 00:25:06.694 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1755219 00:25:06.694 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@110 -- # sync 00:25:06.694 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:06.955 rmmod nvme_tcp 00:25:06.955 rmmod nvme_fabrics 00:25:06.955 rmmod nvme_keyring 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 1751817 ']' 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 1751817 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1751817 ']' 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1751817 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:06.955 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1751817 00:25:07.216 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:07.216 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:07.216 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1751817' 00:25:07.216 killing process with pid 1751817 00:25:07.216 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1751817 00:25:07.216 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1751817 00:25:07.216 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:07.216 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:07.216 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:07.216 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:07.216 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:25:07.216 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:07.216 12:29:41 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:25:07.216 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:07.216 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:07.216 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.216 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.216 12:29:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.756 12:29:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:09.756 00:25:09.756 real 0m38.418s 00:25:09.756 user 1m57.557s 00:25:09.756 sys 0m8.313s 00:25:09.756 12:29:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:09.756 12:29:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:09.756 ************************************ 00:25:09.756 END TEST nvmf_failover 00:25:09.756 ************************************ 00:25:09.756 12:29:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:09.756 12:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:09.756 12:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:09.756 12:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.756 ************************************ 00:25:09.756 START TEST nvmf_host_discovery 00:25:09.756 ************************************ 00:25:09.756 12:29:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:09.756 * Looking for test storage... 
00:25:09.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.756 12:29:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:09.756 12:29:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:25:09.756 12:29:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:09.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.756 --rc genhtml_branch_coverage=1 00:25:09.756 --rc genhtml_function_coverage=1 00:25:09.756 --rc genhtml_legend=1 00:25:09.756 --rc geninfo_all_blocks=1 00:25:09.756 --rc geninfo_unexecuted_blocks=1 00:25:09.756 00:25:09.756 ' 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:09.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.756 --rc genhtml_branch_coverage=1 00:25:09.756 --rc genhtml_function_coverage=1 00:25:09.756 --rc genhtml_legend=1 00:25:09.756 --rc geninfo_all_blocks=1 00:25:09.756 --rc geninfo_unexecuted_blocks=1 00:25:09.756 00:25:09.756 ' 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:09.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.756 --rc genhtml_branch_coverage=1 00:25:09.756 --rc genhtml_function_coverage=1 00:25:09.756 --rc genhtml_legend=1 00:25:09.756 --rc geninfo_all_blocks=1 00:25:09.756 --rc geninfo_unexecuted_blocks=1 00:25:09.756 00:25:09.756 ' 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:09.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.756 --rc genhtml_branch_coverage=1 00:25:09.756 --rc genhtml_function_coverage=1 00:25:09.756 --rc genhtml_legend=1 00:25:09.756 --rc geninfo_all_blocks=1 00:25:09.756 --rc geninfo_unexecuted_blocks=1 00:25:09.756 00:25:09.756 ' 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:09.756 12:29:44 
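The scripts/common.sh trace above (IFS=.-:, read -ra ver1/ver2, then a field-by-field numeric compare with missing fields treated as 0) is how the harness decides whether the installed lcov predates version 2. Boiled down to a standalone sketch (simplified: purely numeric fields, no rc/pre-release handling):

    # Sketch of the cmp_versions idiom traced above: split both versions on
    # ".", "-" and ":", then compare numeric fields left to right.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints: lcov 1.15 predates 2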
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.756 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:09.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:09.757 12:29:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:17.888 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:17.888 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.888 12:29:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:17.888 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:17.888 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:17.888 
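The scan above matches the two e810 functions by PCI ID and then resolves each one to its kernel interface through sysfs. The core of that lookup, extracted into a sketch (the PCI addresses are the ones this host reported; the glob is the same one the trace expands):

    # Sketch: map a PCI network function to its net device name via sysfs.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $dev ]] || continue   # glob may not match if the NIC is unbound
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done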
12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:17.888 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.889 12:29:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:17.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:17.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:25:17.889 00:25:17.889 --- 10.0.0.2 ping statistics --- 00:25:17.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.889 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:17.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:25:17.889 00:25:17.889 --- 10.0.0.1 ping statistics --- 00:25:17.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.889 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=1761527 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 1761527 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1761527 ']' 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:17.889 12:29:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.889 [2024-11-04 12:29:51.341270] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
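The plumbing above isolates one of the two E810 ports (cvl_0_0) in a private network namespace, numbers both ends on 10.0.0.0/24, opens TCP/4420 through iptables, and confirms reachability with a ping in each direction before the target app comes up. On a box without the physical NICs, the same topology can be approximated with a veth pair; a minimal sketch, keeping the interface names from the log but with the veth substitution as an assumption:

    # Assumption: no E810 hardware, so a veth pair stands in for the
    # ice-driver netdevs cvl_0_0 / cvl_0_1 used by the real test.
    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link add cvl_0_1 type veth peer name cvl_0_0
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # target side lives in the namespace
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator IP
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1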
00:25:17.889 [2024-11-04 12:29:51.341339] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.889 [2024-11-04 12:29:51.429600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.889 [2024-11-04 12:29:51.479908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.889 [2024-11-04 12:29:51.479959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.889 [2024-11-04 12:29:51.479967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.889 [2024-11-04 12:29:51.479974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.889 [2024-11-04 12:29:51.479980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:17.889 [2024-11-04 12:29:51.480782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.889 [2024-11-04 12:29:52.226328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.889 [2024-11-04 12:29:52.234487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.889 null0 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.889 null1 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1761603 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1761603 /tmp/host.sock 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1761603 ']' 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:17.889 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:17.889 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.889 [2024-11-04 12:29:52.297907] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
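Two SPDK apps are now in play: the target nvmf_tgt running inside the namespace on its default RPC socket (/var/tmp/spdk.sock), and a second nvmf_tgt started with -m 0x1 -r /tmp/host.sock that plays the NVMe-oF host. waitforlisten blocks until the new app answers on that socket; its body is not shown in this trace, but the behavior amounts to a bounded poll, sketched here (wait_for_rpc_sock is a hypothetical stand-in name; spdk_get_version is SPDK's cheap liveness RPC; the retry bound of 100 mirrors the traced max_retries):

    # Sketch only: a bounded poll until the app's RPC socket answers,
    # approximating what waitforlisten does.
    wait_for_rpc_sock() {
        local sock=$1 i
        for ((i = 0; i < 100; i++)); do
            if scripts/rpc.py -s "$sock" spdk_get_version &>/dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }
    wait_for_rpc_sock /tmp/host.sock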
00:25:17.889 [2024-11-04 12:29:52.297956] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761603 ] 00:25:17.889 [2024-11-04 12:29:52.356643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.889 [2024-11-04 12:29:52.393840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:18.151 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.152 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:18.152 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:18.152 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.152 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.152 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.152 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:18.152 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:18.152 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:18.152 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.152 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:18.152 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.152 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.413 [2024-11-04 12:29:52.815985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:18.413 12:29:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:18.413 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:18.673 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:18.673 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.673 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:18.673 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.673 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:18.673 12:29:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.673 12:29:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:25:18.673 12:29:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:19.243 [2024-11-04 12:29:53.543235] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:19.243 [2024-11-04 12:29:53.543255] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:19.243 [2024-11-04 12:29:53.543268] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:19.243 
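Every get_subsystem_names / get_bdev_list check polled above is a thin pipeline over the host app's RPC socket; reconstructed from the traced host/discovery.sh@59 and @55 commands (rpc_cmd is the suite's wrapper around scripts/rpc.py):

    # Reconstructed from the traced commands.
    get_subsystem_names() {
        scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs    # xargs flattens to one line
    }
    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

Both return the empty string until discovery attaches a controller, which is exactly the '' == '' baseline the assertions above establish before the discovery ctrlr attaches.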
[2024-11-04 12:29:53.673690] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:19.503 [2024-11-04 12:29:53.898775] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:19.503 [2024-11-04 12:29:53.898799] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:19.503 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:19.503 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:19.503 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:19.503 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:19.503 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.503 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:19.503 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:19.503 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.503 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:19.503 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
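The polling wrapper itself appears piecewise in the trace (local cond, local max=10, (( max-- )), eval of the condition, sleep 1 between attempts); reassembled from the autotest_common.sh@914-@920 lines, with the failure return on exhaustion inferred:

    # Reassembled from the trace; the final 'return 1' is inferred.
    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1
    }
    # usage, as traced:
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'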
00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:19.763 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:19.764 12:29:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.764 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.024 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:20.024 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:20.024 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:20.024 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:20.024 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:20.024 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.024 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.024 [2024-11-04 12:29:54.352111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:20.024 [2024-11-04 12:29:54.352488] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:20.024 [2024-11-04 12:29:54.352514] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:20.024 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.024 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:20.024 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:20.024 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:20.024 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:20.024 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:20.024 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:20.024 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:25:20.024 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.024 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:20.025 12:29:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.025 [2024-11-04 12:29:54.483933] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:20.025 12:29:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:20.025 [2024-11-04 12:29:54.588858] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:20.025 [2024-11-04 12:29:54.588877] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:20.025 [2024-11-04 12:29:54.588883] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:20.963 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:20.963 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:20.963 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:20.963 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:20.963 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:20.963 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:20.963 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.963 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:20.963 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:21.224 12:29:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.224 [2024-11-04 12:29:55.603818] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:21.224 [2024-11-04 12:29:55.603841] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:21.224 [2024-11-04 12:29:55.610363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.224 
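notify_get_notifications returns every event recorded after a given offset, so "how many new events" is just a length check on its JSON array. Reconstructed from the traced @74/@75 lines; the offset bookkeeping is inferred from notify_id stepping 0 -> 1 -> 2 as the two namespaces are added:

    # Offset update inferred from the trace (notify_id: 0 -> 1 -> 2), so
    # each check only counts notifications newer than the last one seen.
    get_notification_count() {
        notification_count=$(scripts/rpc.py -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }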
[2024-11-04 12:29:55.610382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.224 [2024-11-04 12:29:55.610391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.224 [2024-11-04 12:29:55.610404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.224 [2024-11-04 12:29:55.610412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.224 [2024-11-04 12:29:55.610420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.224 [2024-11-04 12:29:55.610428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.224 [2024-11-04 12:29:55.610435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.224 [2024-11-04 12:29:55.610442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8bed0 is same with the state(6) to be set 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:21.224 [2024-11-04 12:29:55.620378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8bed0 (9): Bad file descriptor 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.224 [2024-11-04 12:29:55.630418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:21.224 [2024-11-04 12:29:55.630754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.224 [2024-11-04 12:29:55.630770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8bed0 with addr=10.0.0.2, port=4420 00:25:21.224 [2024-11-04 12:29:55.630779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8bed0 is same with the state(6) to be set 00:25:21.224 [2024-11-04 12:29:55.630790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8bed0 (9): Bad file descriptor 00:25:21.224 [2024-11-04 12:29:55.630801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:21.224 [2024-11-04 12:29:55.630808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:21.224 [2024-11-04 12:29:55.630815] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:21.224 [2024-11-04 12:29:55.630828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.224 [2024-11-04 12:29:55.640474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:21.224 [2024-11-04 12:29:55.640975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.224 [2024-11-04 12:29:55.641012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8bed0 with addr=10.0.0.2, port=4420 00:25:21.224 [2024-11-04 12:29:55.641023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8bed0 is same with the state(6) to be set 00:25:21.224 [2024-11-04 12:29:55.641041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8bed0 (9): Bad file descriptor 00:25:21.224 [2024-11-04 12:29:55.641068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:21.224 [2024-11-04 12:29:55.641077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:21.224 [2024-11-04 12:29:55.641090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:21.224 [2024-11-04 12:29:55.641106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.224 [2024-11-04 12:29:55.650528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:21.224 [2024-11-04 12:29:55.650960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.224 [2024-11-04 12:29:55.650998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8bed0 with addr=10.0.0.2, port=4420 00:25:21.224 [2024-11-04 12:29:55.651009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8bed0 is same with the state(6) to be set 00:25:21.224 [2024-11-04 12:29:55.651028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8bed0 (9): Bad file descriptor 00:25:21.224 [2024-11-04 12:29:55.651040] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:21.224 [2024-11-04 12:29:55.651047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:21.224 [2024-11-04 12:29:55.651055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:21.224 [2024-11-04 12:29:55.651070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
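The connect() failed, errno = 111 (ECONNREFUSED) burst here is the expected signature of the listener removal, not a test failure: the host still holds a path to 10.0.0.2:4420, keeps resetting and reconnecting the controller, and only settles once the next discovery log page prunes the dead path in favor of 4421. The trsvcid probe the test uses to confirm the pruning, reconstructed from the traced host/discovery.sh@63 command:

    get_subsystem_paths() {
        scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    # expected transition once the 4420 listener is gone: "4420 4421" -> "4421"
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'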
00:25:21.224 [2024-11-04 12:29:55.660588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:21.224 [2024-11-04 12:29:55.660971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.224 [2024-11-04 12:29:55.661009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8bed0 with addr=10.0.0.2, port=4420 00:25:21.224 [2024-11-04 12:29:55.661021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8bed0 is same with the state(6) to be set 00:25:21.224 [2024-11-04 12:29:55.661040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8bed0 (9): Bad file descriptor 00:25:21.224 [2024-11-04 12:29:55.661052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:21.224 [2024-11-04 12:29:55.661059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:21.224 [2024-11-04 12:29:55.661067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:21.224 [2024-11-04 12:29:55.661083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.224 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.225 [2024-11-04 12:29:55.670644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:21.225 [2024-11-04 12:29:55.670963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.225 [2024-11-04 12:29:55.670977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8bed0 with addr=10.0.0.2, port=4420 00:25:21.225 [2024-11-04 12:29:55.670986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xb8bed0 is same with the state(6) to be set 00:25:21.225 [2024-11-04 12:29:55.670998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8bed0 (9): Bad file descriptor 00:25:21.225 [2024-11-04 12:29:55.671008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:21.225 [2024-11-04 12:29:55.671015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:21.225 [2024-11-04 12:29:55.671022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:21.225 [2024-11-04 12:29:55.671033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.225 [2024-11-04 12:29:55.680703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:21.225 [2024-11-04 12:29:55.681039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.225 [2024-11-04 12:29:55.681053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8bed0 with addr=10.0.0.2, port=4420 00:25:21.225 [2024-11-04 12:29:55.681061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8bed0 is same with the state(6) to be set 00:25:21.225 [2024-11-04 12:29:55.681072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8bed0 (9): Bad file descriptor 00:25:21.225 [2024-11-04 12:29:55.681083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:21.225 [2024-11-04 12:29:55.681089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:21.225 [2024-11-04 12:29:55.681096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:21.225 [2024-11-04 12:29:55.681107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.225 [2024-11-04 12:29:55.690764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:21.225 [2024-11-04 12:29:55.691075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.225 [2024-11-04 12:29:55.691087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8bed0 with addr=10.0.0.2, port=4420 00:25:21.225 [2024-11-04 12:29:55.691095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8bed0 is same with the state(6) to be set 00:25:21.225 [2024-11-04 12:29:55.691106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8bed0 (9): Bad file descriptor 00:25:21.225 [2024-11-04 12:29:55.691116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:21.225 [2024-11-04 12:29:55.691122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:21.225 [2024-11-04 12:29:55.691129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:21.225 [2024-11-04 12:29:55.691139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
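[Editor's note] Interleaved with the reset noise, the xtrace lines show the suite's polling helpers at work: waitforcondition evals a condition string with a countdown from max=10, and get_bdev_list normalizes rpc_cmd output through jq, sort, and xargs before comparing it against the literal "nvme0n1 nvme0n2". A hedged reconstruction of both helpers as they appear in the trace; the eval/countdown structure and the get_bdev_list pipeline are visible above, while the sleep interval and the return-1 fallback are assumptions about parts the log does not show (rpc_cmd is the suite's wrapper around scripts/rpc.py):

```bash
# Reconstruction of the polling helpers traced above (sleep and failure
# return are assumed; the rest matches the xtrace).
waitforcondition() {
    local cond=$1
    local max=10
    while ((max--)); do
        if eval "$cond"; then
            return 0
        fi
        sleep 1          # assumed back-off between retries
    done
    return 1
}

get_bdev_list() {
    # jq extracts names, sort stabilizes order, xargs joins them on one
    # line, so the result compares cleanly against "nvme0n1 nvme0n2".
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
```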
00:25:21.225 [2024-11-04 12:29:55.691680] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:21.225 [2024-11-04 12:29:55.691697] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:21.225 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:21.485 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:21.486 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:21.486 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:21.486 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:21.486 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:21.486 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.486 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.486 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.486 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:21.486 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:21.486 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:21.486 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:21.486 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:21.486 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.486 12:29:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.868 [2024-11-04 12:29:57.052638] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:22.868 [2024-11-04 12:29:57.052656] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:22.868 [2024-11-04 12:29:57.052669] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:22.868 [2024-11-04 12:29:57.180099] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:23.130 [2024-11-04 12:29:57.449955] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:23.130 [2024-11-04 12:29:57.449985] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.130 request: 00:25:23.130 { 00:25:23.130 "name": "nvme", 00:25:23.130 "trtype": "tcp", 00:25:23.130 "traddr": "10.0.0.2", 00:25:23.130 "adrfam": "ipv4", 00:25:23.130 "trsvcid": "8009", 00:25:23.130 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:23.130 "wait_for_attach": true, 00:25:23.130 "method": "bdev_nvme_start_discovery", 00:25:23.130 "req_id": 1 00:25:23.130 } 00:25:23.130 Got JSON-RPC error response 00:25:23.130 response: 00:25:23.130 { 00:25:23.130 "code": -17, 00:25:23.130 "message": "File exists" 00:25:23.130 } 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.130 request: 00:25:23.130 { 00:25:23.130 "name": "nvme_second", 00:25:23.130 "trtype": "tcp", 00:25:23.130 "traddr": "10.0.0.2", 00:25:23.130 "adrfam": "ipv4", 00:25:23.130 "trsvcid": "8009", 00:25:23.130 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:23.130 "wait_for_attach": true, 00:25:23.130 "method": "bdev_nvme_start_discovery", 00:25:23.130 "req_id": 1 00:25:23.130 } 00:25:23.130 Got JSON-RPC error response 00:25:23.130 response: 00:25:23.130 { 00:25:23.130 "code": -17, 00:25:23.130 "message": "File exists" 00:25:23.130 } 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:23.130 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:23.130 12:29:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.131 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:23.131 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.131 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:23.131 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.131 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:23.131 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.131 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:23.131 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:23.131 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:23.131 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:23.131 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:23.391 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.391 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:23.391 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.391 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:23.391 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.391 12:29:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.334 [2024-11-04 12:29:58.709426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:24.334 [2024-11-04 12:29:58.709456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8a200 with addr=10.0.0.2, port=8010 00:25:24.334 [2024-11-04 12:29:58.709469] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:24.334 [2024-11-04 12:29:58.709477] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:24.334 [2024-11-04 12:29:58.709483] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:25.276 [2024-11-04 12:29:59.711803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:25.276 [2024-11-04 12:29:59.711827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8a200 with addr=10.0.0.2, port=8010 00:25:25.276 [2024-11-04 12:29:59.711839] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:25.276 [2024-11-04 12:29:59.711846] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:25:25.276 [2024-11-04 12:29:59.711852] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:26.217 [2024-11-04 12:30:00.713799] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:26.217 request: 00:25:26.217 { 00:25:26.217 "name": "nvme_second", 00:25:26.217 "trtype": "tcp", 00:25:26.217 "traddr": "10.0.0.2", 00:25:26.217 "adrfam": "ipv4", 00:25:26.217 "trsvcid": "8010", 00:25:26.217 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:26.217 "wait_for_attach": false, 00:25:26.217 "attach_timeout_ms": 3000, 00:25:26.217 "method": "bdev_nvme_start_discovery", 00:25:26.217 "req_id": 1 00:25:26.217 } 00:25:26.217 Got JSON-RPC error response 00:25:26.217 response: 00:25:26.217 { 00:25:26.217 "code": -110, 00:25:26.217 "message": "Connection timed out" 00:25:26.217 } 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1761603 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:26.217 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:26.217 rmmod nvme_tcp 00:25:26.478 rmmod nvme_fabrics 00:25:26.478 rmmod nvme_keyring 00:25:26.478 12:30:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:26.478 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:26.478 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:26.478 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 1761527 ']' 00:25:26.478 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 1761527 00:25:26.478 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1761527 ']' 00:25:26.478 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1761527 00:25:26.478 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:26.478 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:26.478 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1761527 00:25:26.478 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:26.478 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:26.478 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1761527' 00:25:26.478 killing process with pid 1761527 00:25:26.478 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1761527 00:25:26.478 12:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1761527 00:25:26.478 12:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:26.478 12:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:26.478 12:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:26.478 12:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:26.478 12:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:25:26.478 12:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:26.478 12:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:25:26.478 12:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:26.478 12:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:26.478 12:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.478 12:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.478 12:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.024 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:29.024 00:25:29.024 real 0m19.270s 00:25:29.024 user 0m21.956s 00:25:29.024 sys 0m6.847s 00:25:29.024 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:29.024 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.024 
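[Editor's note] The records above exercise the two failure modes this test provokes: starting a second discovery service named "nvme" while one already exists returns JSON-RPC code -17 ("File exists"), and pointing nvme_second at the unserved port 8010 with a 3000 ms attach timeout ends in -110 ("Connection timed out"). A sketch of driving the same RPC by hand; the rpc.py flags, socket path, and NQN are copied from the log, while the grep-based error classification is illustrative, not the suite's own check:

```bash
# Sketch: invoke bdev_nvme_start_discovery directly and classify the two
# JSON-RPC errors seen above (classification logic is illustrative only).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

start_discovery() {
    local name=$1 port=$2
    shift 2
    local out
    if out=$("$rpc" -s /tmp/host.sock bdev_nvme_start_discovery \
            -b "$name" -t tcp -a 10.0.0.2 -s "$port" -f ipv4 \
            -q nqn.2021-12.io.spdk:test "$@" 2>&1); then
        echo "$name: attached"
    elif grep -q 'File exists' <<< "$out"; then
        echo "$name: duplicate discovery service (code -17)"   # expected here
    elif grep -q 'Connection timed out' <<< "$out"; then
        echo "$name: attach timed out (code -110)"             # expected for 8010
    else
        echo "$name: unexpected failure: $out" >&2
        return 1
    fi
}

start_discovery nvme 8009 -w              # -17: service "nvme" already exists
start_discovery nvme_second 8010 -T 3000  # -110: nothing listening on 8010
```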
************************************ 00:25:29.024 END TEST nvmf_host_discovery 00:25:29.024 ************************************ 00:25:29.024 12:30:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:29.024 12:30:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:29.024 12:30:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:29.024 12:30:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.024 ************************************ 00:25:29.024 START TEST nvmf_host_multipath_status 00:25:29.024 ************************************ 00:25:29.024 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:29.024 * Looking for test storage... 00:25:29.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:29.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.025 --rc genhtml_branch_coverage=1 00:25:29.025 --rc genhtml_function_coverage=1 00:25:29.025 --rc genhtml_legend=1 00:25:29.025 --rc geninfo_all_blocks=1 00:25:29.025 --rc geninfo_unexecuted_blocks=1 00:25:29.025 00:25:29.025 ' 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:29.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.025 --rc genhtml_branch_coverage=1 00:25:29.025 --rc genhtml_function_coverage=1 00:25:29.025 --rc genhtml_legend=1 00:25:29.025 --rc geninfo_all_blocks=1 00:25:29.025 --rc geninfo_unexecuted_blocks=1 00:25:29.025 00:25:29.025 ' 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:29.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.025 --rc genhtml_branch_coverage=1 00:25:29.025 --rc genhtml_function_coverage=1 00:25:29.025 --rc genhtml_legend=1 00:25:29.025 --rc geninfo_all_blocks=1 00:25:29.025 --rc geninfo_unexecuted_blocks=1 00:25:29.025 00:25:29.025 ' 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:29.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.025 --rc genhtml_branch_coverage=1 00:25:29.025 --rc genhtml_function_coverage=1 00:25:29.025 --rc genhtml_legend=1 00:25:29.025 --rc geninfo_all_blocks=1 00:25:29.025 --rc geninfo_unexecuted_blocks=1 00:25:29.025 00:25:29.025 ' 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
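[Editor's note] The lcov version gate traced above splits both version strings on '.', '-' and ':' and compares them numerically component by component. A condensed reconstruction; the split, the decimal checks, and the loop bound follow the xtrace, while padding a missing component with 0 and the handling of the remaining operators are assumptions about branches the trace does not exercise:

```bash
# Condensed reconstruction of the cmp_versions walk traced above.
cmp_versions() {
    local IFS=.-:                       # split on '.', '-' and ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            [[ $op == '>' || $op == '>=' ]]; return
        elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            [[ $op == '<' || $op == '<=' ]]; return
        fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the trace's result: true
```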
00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:29.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:29.025 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:29.026 12:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:35.609 12:30:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:35.609 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
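[Editor's note] gather_supported_nvmf_pci_devs, traced above, buckets NICs by PCI vendor:device pairs before picking TCP-capable interfaces; 0x8086:0x159b is the E810 part bound to the ice driver that the run matches at 0000:4b:00.0 and 0000:4b:00.1. A hypothetical standalone equivalent; the ID table below is copied from the trace, but scanning with lspci is an illustration, since the suite walks a cached PCI bus map instead:

```bash
# Hypothetical equivalent of the device bucketing traced above: classify
# network-class PCI functions by vendor:device ID (subset of the trace's table).
declare -A nic_class=(
    [8086:1592]=e810 [8086:159b]=e810
    [8086:37d2]=x722
    [15b3:1017]=mlx  [15b3:1019]=mlx
)

# lspci -Dn prints: <domain:bus:dev.fn> <class:> <vendor:device> ...
while read -r addr _ id _; do
    [[ -n ${nic_class[$id]:-} ]] &&
        echo "Found $addr ($id) -> ${nic_class[$id]}"
done < <(lspci -Dn | awk '$2 ~ /^02/ {print $1, $2, $3}')
```

Separately, the earlier "common.sh: line 33: [: : integer expression expected" record is harmless noise from testing an unset variable with -eq ('[' '' -eq 1 ']'); supplying a default, e.g. [ "${SOME_FLAG:-0}" -eq 1 ] (variable name hypothetical), would silence it.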
00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:35.609 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:35.609 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:25:35.609 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:35.609 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:35.610 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.610 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:35.610 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:35.610 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:35.610 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:35.610 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:35.610 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:35.610 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:35.610 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.610 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:35.610 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:35.610 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:35.870 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:35.870 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:35.870 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:35.870 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:35.870 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:35.870 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:36.131 12:30:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:36.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:25:36.131 00:25:36.131 --- 10.0.0.2 ping statistics --- 00:25:36.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.131 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:36.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:36.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:25:36.131 00:25:36.131 --- 10.0.0.1 ping statistics --- 00:25:36.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.131 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=1767915 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 1767915 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1767915 ']' 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:36.131 12:30:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:36.131 12:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:36.131 [2024-11-04 12:30:10.563787] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:25:36.131 [2024-11-04 12:30:10.563862] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.131 [2024-11-04 12:30:10.636410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:36.131 [2024-11-04 12:30:10.679730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.131 [2024-11-04 12:30:10.679781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.131 [2024-11-04 12:30:10.679789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.131 [2024-11-04 12:30:10.679800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.131 [2024-11-04 12:30:10.679806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:36.131 [2024-11-04 12:30:10.681298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.131 [2024-11-04 12:30:10.681298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.072 12:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:37.072 12:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:37.072 12:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:37.072 12:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:37.072 12:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:37.072 12:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.072 12:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1767915 00:25:37.072 12:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:37.072 [2024-11-04 12:30:11.524836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.072 12:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:37.332 Malloc0 00:25:37.332 12:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:37.333 12:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:37.592 12:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:37.854 [2024-11-04 12:30:12.200039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.854 12:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:37.854 [2024-11-04 12:30:12.368409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:37.854 12:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1768544 00:25:37.854 12:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:37.854 12:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:37.854 12:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1768544 /var/tmp/bdevperf.sock 00:25:37.854 12:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1768544 ']' 00:25:37.854 12:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:37.854 12:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:37.854 12:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:37.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
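Everything the initiator sees from here on is built over JSON-RPC: the target (running inside the cvl_0_0_ns_spdk namespace) exports one malloc-backed namespace through nqn.2016-06.io.spdk:cnode1 on two TCP listeners, 4420 and 4421, while bdevperf is launched with -z so it idles on /var/tmp/bdevperf.sock until driven by bdevperf.py perform_tests later in the trace. A minimal sketch of the target-side sequence, using only RPCs that appear verbatim in this trace ($rpc is shorthand introduced here for the logged scripts/rpc.py path):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # transport, backing bdev, subsystem with ANA reporting, namespace, two listeners
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The -r flag enables ANA reporting on the subsystem, which is what the rest of this test exercises.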
00:25:37.854 12:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:37.854 12:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:38.116 12:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:38.116 12:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:38.116 12:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:38.377 12:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:38.638 Nvme0n1 00:25:38.898 12:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:39.159 Nvme0n1 00:25:39.159 12:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:39.159 12:30:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:41.072 12:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:41.072 12:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:41.333 12:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:41.333 12:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:42.718 12:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:42.718 12:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:42.718 12:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.718 12:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:42.718 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.718 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:42.718 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.718 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:42.718 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:42.718 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:42.718 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.718 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:42.990 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.990 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:42.990 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.990 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:43.251 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.251 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:43.251 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.251 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:43.511 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.511 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:43.511 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.511 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:43.511 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.511 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:43.511 12:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
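Every port_status probe in this run is the same two-step query against the initiator's RPC socket: dump bdev_nvme_get_io_paths and pluck one field for the listener's trsvcid. A minimal sketch of that check, with the jq filter copied from the trace (the function body is inferred shorthand, not quoted from host/multipath_status.sh):

  sock=/var/tmp/bdevperf.sock
  # port_status <trsvcid> <field> <expected>; field is current, connected or accessible
  port_status() {
      local v
      v=$($rpc -s $sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
      [[ "$v" == "$3" ]]
  }
  # the @92 check above reduces to six such probes, e.g.:
  port_status 4420 current true && port_status 4421 current false

Each check_status call in the trace chains six such probes, one per field per port.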
00:25:43.772 12:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:44.032 12:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:44.972 12:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:44.972 12:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:44.972 12:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.972 12:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:44.972 12:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:44.972 12:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:45.243 12:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.243 12:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:45.243 12:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.243 12:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:45.243 12:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.243 12:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:45.512 12:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.512 12:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:45.512 12:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.512 12:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:45.773 12:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.773 12:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:45.773 12:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
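Each ANA transition in this run is a pair of listener-level RPCs against the target, one per port, followed by a one-second sleep so the initiator has time to pick up the changed ANA state before the next check. A sketch of that helper under the same assumptions as above:

  # set_ANA_state <state-for-4420> <state-for-4421>
  # states exercised in this trace: optimized, non_optimized, inaccessible
  set_ANA_state() {
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }
  set_ANA_state non_optimized optimized && sleep 1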
00:25:45.773 12:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:45.773 12:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.773 12:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:45.773 12:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.773 12:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:46.033 12:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.033 12:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:46.033 12:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:46.293 12:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:46.293 12:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:47.673 12:30:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:47.673 12:30:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:47.673 12:30:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.673 12:30:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:47.673 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.673 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:47.673 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.673 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:47.673 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:47.673 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:47.673 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.673 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:47.933 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.933 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:47.933 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.933 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:48.193 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.193 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:48.193 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.193 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:48.193 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.193 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:48.193 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.193 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:48.454 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.454 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:48.454 12:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:48.715 12:30:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:48.715 12:30:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:50.100 12:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:50.100 12:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:50.100 12:30:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.100 12:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:50.100 12:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.100 12:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:50.100 12:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.100 12:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:50.100 12:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:50.100 12:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:50.100 12:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.100 12:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:50.362 12:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.362 12:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:50.362 12:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.362 12:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:50.623 12:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.623 12:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:50.623 12:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.623 12:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:50.884 12:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.884 12:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:50.884 12:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.884 12:30:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:50.884 12:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:50.884 12:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:50.885 12:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:51.145 12:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:51.405 12:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:52.348 12:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:52.348 12:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:52.348 12:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.348 12:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:52.348 12:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:52.348 12:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:52.608 12:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.608 12:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:52.608 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:52.608 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:52.608 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.608 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:52.867 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.867 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:52.867 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.867 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:53.127 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.127 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:53.127 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.127 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:53.127 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.127 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:53.127 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.127 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:53.388 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.388 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:53.388 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:53.648 12:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:53.648 12:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:54.588 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:54.588 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:54.588 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.588 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:54.848 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:54.848 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:54.848 12:30:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.848 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:55.226 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.226 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:55.226 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.226 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:55.226 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.226 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:55.226 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.226 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:55.538 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.538 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:55.538 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.538 12:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:55.538 12:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:55.538 12:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:55.538 12:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.538 12:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:55.817 12:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.817 12:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:56.076 12:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:25:56.076 12:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:56.076 12:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:56.336 12:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:57.276 12:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:57.276 12:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:57.276 12:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.276 12:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:57.537 12:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.537 12:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:57.537 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.537 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:57.796 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.796 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:57.796 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.796 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:58.056 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.056 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:58.056 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.056 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:58.056 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.056 12:30:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:58.056 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.056 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:58.316 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.316 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:58.316 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:58.316 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.576 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.576 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:58.576 12:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:58.576 12:30:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:58.836 12:30:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:59.775 12:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:59.775 12:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:59.775 12:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.775 12:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:00.035 12:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:00.035 12:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:00.035 12:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.035 12:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:00.295 12:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.295 12:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:00.295 12:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.295 12:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:00.295 12:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.295 12:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:00.296 12:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.296 12:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:00.555 12:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.555 12:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:00.555 12:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.555 12:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:00.815 12:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.815 12:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:00.815 12:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.815 12:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:00.815 12:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.815 12:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:00.815 12:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:01.074 12:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:01.334 12:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
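Taken together, the check_status expectations in this trace encode the initiator's path-selection rules: under the default active_passive policy exactly one connected path reports current=true, an optimized listener is preferred over a non_optimized one, and a listener set inaccessible stays connected=true but flips accessible=false and can no longer be current. After @116 switches the policy with bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active, every accessible path in the best available ANA group reports current=true, which is why the optimized/optimized and non_optimized/non_optimized checks in this stretch expect true for both ports.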
00:26:02.273 12:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:02.273 12:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:02.273 12:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.273 12:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:02.533 12:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.533 12:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:02.533 12:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.533 12:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:02.793 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.793 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:02.793 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.793 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:02.793 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.793 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:02.793 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.793 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:03.052 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.052 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:03.052 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.052 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:03.311 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.311 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:03.311 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.311 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:03.311 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.311 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:03.311 12:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:03.571 12:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:03.830 12:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:04.771 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:04.771 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:04.772 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.772 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.032 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.032 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:05.032 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.032 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.299 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.299 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.299 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.299 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.299 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:26:05.299 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:05.299 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.299 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:05.564 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.564 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:05.564 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.564 12:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:05.824 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.824 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:05.824 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.824 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:05.824 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.824 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1768544 00:26:05.824 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1768544 ']' 00:26:05.824 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1768544 00:26:05.824 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:05.824 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:05.824 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1768544 00:26:06.100 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:06.100 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:06.100 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1768544' 00:26:06.100 killing process with pid 1768544 00:26:06.100 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1768544 00:26:06.100 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1768544 00:26:06.100 { 00:26:06.100 "results": [ 00:26:06.100 { 00:26:06.100 "job": "Nvme0n1", 
00:26:06.100 "core_mask": "0x4", 00:26:06.100 "workload": "verify", 00:26:06.100 "status": "terminated", 00:26:06.100 "verify_range": { 00:26:06.100 "start": 0, 00:26:06.100 "length": 16384 00:26:06.100 }, 00:26:06.100 "queue_depth": 128, 00:26:06.100 "io_size": 4096, 00:26:06.100 "runtime": 26.768357, 00:26:06.100 "iops": 10808.47061326924, 00:26:06.100 "mibps": 42.22058833308297, 00:26:06.100 "io_failed": 0, 00:26:06.100 "io_timeout": 0, 00:26:06.100 "avg_latency_us": 11823.620783916587, 00:26:06.100 "min_latency_us": 278.18666666666667, 00:26:06.100 "max_latency_us": 3075822.933333333 00:26:06.101 } 00:26:06.101 ], 00:26:06.101 "core_count": 1 00:26:06.101 } 00:26:06.101 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1768544 00:26:06.101 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:06.101 [2024-11-04 12:30:12.433037] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:26:06.101 [2024-11-04 12:30:12.433095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768544 ] 00:26:06.101 [2024-11-04 12:30:12.484118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.101 [2024-11-04 12:30:12.512884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.101 Running I/O for 90 seconds... 00:26:06.101 9582.00 IOPS, 37.43 MiB/s [2024-11-04T11:30:40.671Z] 9638.50 IOPS, 37.65 MiB/s [2024-11-04T11:30:40.671Z] 9660.00 IOPS, 37.73 MiB/s [2024-11-04T11:30:40.671Z] 9661.50 IOPS, 37.74 MiB/s [2024-11-04T11:30:40.671Z] 9915.40 IOPS, 38.73 MiB/s [2024-11-04T11:30:40.671Z] 10397.17 IOPS, 40.61 MiB/s [2024-11-04T11:30:40.671Z] 10728.00 IOPS, 41.91 MiB/s [2024-11-04T11:30:40.671Z] 10739.38 IOPS, 41.95 MiB/s [2024-11-04T11:30:40.671Z] 10620.44 IOPS, 41.49 MiB/s [2024-11-04T11:30:40.671Z] 10530.40 IOPS, 41.13 MiB/s [2024-11-04T11:30:40.671Z] 10453.64 IOPS, 40.83 MiB/s [2024-11-04T11:30:40.671Z] [2024-11-04 12:30:25.522792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.101 [2024-11-04 12:30:25.522835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.522856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.522863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.522874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.522880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.522891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.522897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.522907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.522912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.522922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.522928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.522938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.522944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.522955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.522961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:06.101 [2024-11-04 12:30:25.524138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:06.101 [2024-11-04 12:30:25.524386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.101 [2024-11-04 12:30:25.524391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.524402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.524407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.524417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.524423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.524433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.524438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.524449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.524454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.524466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.524471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.524481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.524487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.524497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.524502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.524513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.524518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.524528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.524533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.524544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.524549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.524559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.524565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.524576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.524581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.524591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.524596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:26:06.102 [2024-11-04 12:30:25.524607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.524612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.524622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.102 [2024-11-04 12:30:25.524628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.102 [2024-11-04 12:30:25.525107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:06.102 [2024-11-04 12:30:25.525426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.102 [2024-11-04 12:30:25.525431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:06.103 [2024-11-04 12:30:25.525541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 
lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.525728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.525734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.103 [2024-11-04 12:30:25.526288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
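Every completion in this stretch carries status (03/02): status code type 3h, Path Related Status, with status code 2h, Asymmetric Access Inaccessible. That is what the target returns for I/O landing on a listener whose ANA state has been set to inaccessible; the initiator's multipath layer then retries the I/O on the remaining path, which is why check_status reports connected=true but accessible=false for that port. A hypothetical way to summarize a captured log such as the try.txt dumped above, counting these rejections per submission queue (not part of the test itself):

  # Hypothetical post-processing: tally ANA-inaccessible completions per qid
  # from a saved bdevperf log.
  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' try.txt | sort | uniq -c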
00:26:06.103 [2024-11-04 12:30:25.526329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.103 [2024-11-04 12:30:25.526350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:06.103 [2024-11-04 12:30:25.526360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:06.104 [2024-11-04 12:30:25.526376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:06.104 [2024-11-04 12:30:25.526391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:06.104 [2024-11-04 12:30:25.526407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:06.104 [2024-11-04 12:30:25.526424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:06.104 [2024-11-04 12:30:25.526440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:06.104 [2024-11-04 12:30:25.526643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:06.104 [2024-11-04 12:30:25.526662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:06.104 [2024-11-04 12:30:25.526677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:06.104 [2024-11-04 12:30:25.526693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:06.104 [2024-11-04 12:30:25.526708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:06.104 [2024-11-04 12:30:25.526723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:06.104 [2024-11-04 12:30:25.526738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:06.104 [2024-11-04 12:30:25.526758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:06.104 [2024-11-04 12:30:25.526774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.104 [2024-11-04 12:30:25.526789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.104 [2024-11-04 12:30:25.526806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:06.104 [2024-11-04 12:30:25.526822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.104 [2024-11-04 12:30:25.526828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:06.104 [2024-11-04 12:30:25.526838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:06.104 [2024-11-04 12:30:25.526843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:06.104 [2024-11-04 12:30:25.526853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:06.104 [2024-11-04 12:30:25.526858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
[... several hundred similar nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs omitted: WRITE (and occasional READ) commands on qid:1, nsid:1, LBAs 72776-73792, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
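Each completion above carries the NVMe status pair printed as (SCT/SC): SCT 03h is Path Related Status and SC 02h is Asymmetric Access Inaccessible, i.e. the target is reporting the namespace's ANA state as inaccessible on this path (dnr:0, so the host may retry) rather than a media or transport error. To digest a run of output like this, a small stand-alone helper can tally completions per status instead of eyeballing them. The script below is a hypothetical sketch (not part of SPDK or this test suite) and assumes one log record per line on stdin:

#!/usr/bin/env python3
# summarize_completions.py - tally SPDK completion NOTICE lines by (SCT/SC).
import re
import sys
from collections import Counter

# Matches e.g. "... spdk_nvme_print_completion: *NOTICE*:
#   ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 ..."
COMPLETION_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: "
    r"(?P<status>.+?) \((?P<sct>[0-9a-fA-F]{2})/(?P<sc>[0-9a-fA-F]{2})\) "
    r"qid:(?P<qid>\d+) cid:(?P<cid>\d+)"
)

# Status Code Type names from the NVMe base specification.
SCT_NAMES = {
    0x0: "Generic Command Status",
    0x1: "Command Specific Status",
    0x2: "Media and Data Integrity Errors",
    0x3: "Path Related Status",  # (03/02) = Asymmetric Access Inaccessible
}

counts = Counter()
for line in sys.stdin:
    m = COMPLETION_RE.search(line)
    if m:
        counts[(int(m["sct"], 16), int(m["sc"], 16), m["status"])] += 1

for (sct, sc, status), n in counts.most_common():
    sct_name = SCT_NAMES.get(sct, "Unknown/Vendor Specific")
    print(f"{n:6d}x ({sct:02x}/{sc:02x}) {status} [{sct_name}]")

Piping the console log through it (e.g. ./summarize_completions.py < console.log) would collapse this section to a single line counting the (03/02) completions on qid:1.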
00:26:06.109 [2024-11-04 12:30:25.541053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:06.109 [2024-11-04 12:30:25.541059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:06.109 [2024-11-04 12:30:25.541070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:06.109 [2024-11-04 12:30:25.541075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:06.109 [2024-11-04 12:30:25.541088] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.109 [2024-11-04 12:30:25.541093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:26:06.110 [2024-11-04 12:30:25.541388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.110 [2024-11-04 12:30:25.541668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.541699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.541710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.549355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.549390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.549397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.549408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.549414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.549425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.549431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.549441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.549446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.549456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.549463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.549473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.549478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.549492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.549498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.549510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.110 [2024-11-04 12:30:25.549515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:06.110 [2024-11-04 12:30:25.549828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:06.110 [2024-11-04 12:30:25.549839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.549852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.549858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.549869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.549874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.549884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.549890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.549901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.549906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.549917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.549922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.549933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.549938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.549948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.549954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.549966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.549971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.549981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.549987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73680 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550157] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 
12:30:25.550317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:06.111 [2024-11-04 12:30:25.550463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.111 [2024-11-04 12:30:25.550468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550790] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.112 [2024-11-04 12:30:25.550822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.112 [2024-11-04 12:30:25.550839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.550992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.550998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.551008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.551014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.551024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.551029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.551040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.551046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.551056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.551061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.551072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.551078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:06.112 [2024-11-04 12:30:25.551089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.112 [2024-11-04 12:30:25.551094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:06.113 [2024-11-04 12:30:25.551105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 
nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.113 [2024-11-04 12:30:25.551109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:06.113 [2024-11-04 12:30:25.551120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.113 [2024-11-04 12:30:25.551125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:06.113 [2024-11-04 12:30:25.551136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.113 [2024-11-04 12:30:25.551142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:06.113 [2024-11-04 12:30:25.551152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.113 [2024-11-04 12:30:25.551158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:06.113 [2024-11-04 12:30:25.551168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.113 [2024-11-04 12:30:25.551176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:06.113 [2024-11-04 12:30:25.551187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.113 [2024-11-04 12:30:25.551191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:06.113 [2024-11-04 12:30:25.551202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.113 [2024-11-04 12:30:25.551208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:06.113 [2024-11-04 12:30:25.551219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.113 [2024-11-04 12:30:25.551224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:06.113 [2024-11-04 12:30:25.551234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.113 [2024-11-04 12:30:25.551239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:06.113 [2024-11-04 12:30:25.551250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.113 [2024-11-04 12:30:25.551255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:06.113 [2024-11-04 12:30:25.551265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:06.113 [2024-11-04 12:30:25.551271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:06.113 [2024-11-04 12:30:25.551281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:06.113 [2024-11-04 12:30:25.551287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
[... several hundred further nvme_qpair.c NOTICE pairs elided: queued WRITE/READ commands on sqid:1 nsid:1 (lba 72776-73792, len:8), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), log window 00:26:06.113-00:26:06.118 / 12:30:25.551-12:30:25.565 ...]
00:26:06.118 [2024-11-04 12:30:25.565414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:06.118 [2024-11-04 12:30:25.565419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02)
qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:06.118 [2024-11-04 12:30:25.565430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-11-04 12:30:25.565436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:06.118 [2024-11-04 12:30:25.565447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-11-04 12:30:25.565452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.118 [2024-11-04 12:30:25.565462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-11-04 12:30:25.565469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:06.118 [2024-11-04 12:30:25.565480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-11-04 12:30:25.565486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:06.118 [2024-11-04 12:30:25.565496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-11-04 12:30:25.565501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:06.118 [2024-11-04 12:30:25.565512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-11-04 12:30:25.565517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:06.118 [2024-11-04 12:30:25.565529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-11-04 12:30:25.565534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:06.118 [2024-11-04 12:30:25.565545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-11-04 12:30:25.565551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:06.118 [2024-11-04 12:30:25.565563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-11-04 12:30:25.565569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:06.118 [2024-11-04 12:30:25.565579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-11-04 12:30:25.565585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.118 [2024-11-04 12:30:25.565596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-11-04 12:30:25.565602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:06.118 [2024-11-04 12:30:25.565614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-11-04 12:30:25.565619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:06.118 [2024-11-04 12:30:25.565630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-11-04 12:30:25.565635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:06.118 [2024-11-04 12:30:25.565646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-11-04 12:30:25.565653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:06.118 [2024-11-04 12:30:25.565664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-11-04 12:30:25.565669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:06.118 [2024-11-04 12:30:25.565679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.118 [2024-11-04 12:30:25.565685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:06.118 [2024-11-04 12:30:25.565696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.565701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.565712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.565718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.565731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.565736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:06.119 [2024-11-04 12:30:25.566027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-11-04 12:30:25.566227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.119 [2024-11-04 12:30:25.566242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.566452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.566457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.574338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.574359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.574373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.574381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.574392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.574398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:26:06.119 [2024-11-04 12:30:25.574409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.574414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.574426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.574432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.574444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.574449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.574459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.574465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.574475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.574481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.574491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.574497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:06.119 [2024-11-04 12:30:25.574510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.119 [2024-11-04 12:30:25.574517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.574527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.574532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.574543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.574550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.574909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.574921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.574934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.574941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.574951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.574957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.574967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.574973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.574983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.574989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.574999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:06.120 [2024-11-04 12:30:25.575247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.120 [2024-11-04 12:30:25.575376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:06.120 [2024-11-04 12:30:25.575500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.120 [2024-11-04 12:30:25.575505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:06.121 
[2024-11-04 12:30:25.575728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.575991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.575996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.576007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.576012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.576023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.576028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.576039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.576044] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.576054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.576062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.576073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.576078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.576089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.576094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.576105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.576110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.576121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.576126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.576137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.121 [2024-11-04 12:30:25.576142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:06.121 [2024-11-04 12:30:25.576153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.122 [2024-11-04 12:30:25.576159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:06.122 [2024-11-04 12:30:25.576169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.122 [2024-11-04 12:30:25.576176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:06.122 [2024-11-04 12:30:25.576187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.122 [2024-11-04 12:30:25.576192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:06.122 [2024-11-04 12:30:25.576202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.122 [2024-11-04 12:30:25.576208] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:06.122 [2024-11-04 12:30:25.576219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:06.122 [2024-11-04 12:30:25.576224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... long run of similar command/completion pairs from 12:30:25.576 through 12:30:25.588 condensed: WRITE (and occasional READ) commands on sqid:1, len:8, lba 72776-73792, every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing monotonically and wrapping 007f -> 0000 twice, p:0 m:0 dnr:0 throughout ...]
00:26:06.127 [2024-11-04 12:30:25.588728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:06.127 [2024-11-04 12:30:25.588736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
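Editorial aside: every completion in the run above carries status (03/02), i.e. status code type 0x3 (Path Related) with status code 0x2 (Asymmetric Access Inaccessible) — the ANA group behind this controller path went inaccessible mid-workload, so the target failed each queued I/O with a path error rather than a media error. Below is a minimal consumer-side sketch, not SPDK code from this run, of how an I/O completion callback might classify these completions; the enum, helper, and field names (SPDK_NVME_SCT_PATH, SPDK_NVME_SC_ASYMMETRIC_ACCESS_INACCESSIBLE, spdk_nvme_cpl_is_success, status.dnr) are assumed from SPDK's public nvme_spec.h, and the retry-counter context is hypothetical.

/* Sketch: classify the (03/02) completions seen in this log inside an
 * spdk_nvme_cmd_cb I/O callback. Names assumed from SPDK public headers. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	unsigned int *ana_retries = ctx;	/* hypothetical caller context */

	if (spdk_nvme_cpl_is_success(cpl)) {
		return;
	}

	/* "(03/02)" printed above = SCT 0x3 (Path Related), SC 0x2
	 * (Asymmetric Access Inaccessible). dnr:0 means the Do Not Retry
	 * bit is clear, so the command may be resubmitted, ideally on a
	 * path whose ANA state is optimized/non-optimized. */
	if (cpl->status.sct == SPDK_NVME_SCT_PATH &&
	    cpl->status.sc == SPDK_NVME_SC_ASYMMETRIC_ACCESS_INACCESSIBLE &&
	    !cpl->status.dnr) {
		(*ana_retries)++;
		/* resubmit on an alternate controller path / qpair here */
		return;
	}

	fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
		cpl->status.sct, cpl->status.sc);
}

The p:0 m:0 dnr:0 suffix on every completion above matches this layout (phase tag, more bit, do-not-retry bit all clear), which is why retrying on another path — the multipath failover this test exercises — is the intended handling.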
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:06.126 [2024-11-04 12:30:25.588510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.127 [2024-11-04 12:30:25.588515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:25.588528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.127 [2024-11-04 12:30:25.588534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:25.588546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.127 [2024-11-04 12:30:25.588551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:25.588563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.127 [2024-11-04 12:30:25.588569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:25.588728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.127 [2024-11-04 12:30:25.588736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.127 10315.42 IOPS, 40.29 MiB/s [2024-11-04T11:30:40.697Z] 9521.92 IOPS, 37.20 MiB/s [2024-11-04T11:30:40.697Z] 8841.79 IOPS, 34.54 MiB/s [2024-11-04T11:30:40.697Z] 8271.13 IOPS, 32.31 MiB/s [2024-11-04T11:30:40.697Z] 8560.62 IOPS, 33.44 MiB/s [2024-11-04T11:30:40.697Z] 8824.35 IOPS, 34.47 MiB/s [2024-11-04T11:30:40.697Z] 9262.83 IOPS, 36.18 MiB/s [2024-11-04T11:30:40.697Z] 9661.37 IOPS, 37.74 MiB/s [2024-11-04T11:30:40.697Z] 9949.40 IOPS, 38.86 MiB/s [2024-11-04T11:30:40.697Z] 10110.29 IOPS, 39.49 MiB/s [2024-11-04T11:30:40.697Z] 10240.50 IOPS, 40.00 MiB/s [2024-11-04T11:30:40.697Z] 10505.74 IOPS, 41.04 MiB/s [2024-11-04T11:30:40.697Z] 10770.38 IOPS, 42.07 MiB/s [2024-11-04T11:30:40.697Z] [2024-11-04 12:30:38.205841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.127 [2024-11-04 12:30:38.205879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.205907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.127 [2024-11-04 12:30:38.205914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.127 
[2024-11-04 12:30:38.206577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.127 [2024-11-04 12:30:38.206595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.127 [2024-11-04 12:30:38.206611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.127 [2024-11-04 12:30:38.206632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.127 [2024-11-04 12:30:38.206648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.127 [2024-11-04 12:30:38.206664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.127 [2024-11-04 12:30:38.206681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.127 [2024-11-04 12:30:38.206696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.127 [2024-11-04 12:30:38.206713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.127 [2024-11-04 12:30:38.206728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47664 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.127 [2024-11-04 12:30:38.206745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.127 [2024-11-04 12:30:38.206766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.127 [2024-11-04 12:30:38.206782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.127 [2024-11-04 12:30:38.206797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.127 [2024-11-04 12:30:38.206813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.127 [2024-11-04 12:30:38.206834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.127 [2024-11-04 12:30:38.206850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:06.127 [2024-11-04 12:30:38.206861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.127 [2024-11-04 12:30:38.206866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:06.127 10887.64 IOPS, 42.53 MiB/s [2024-11-04T11:30:40.697Z] 10840.81 IOPS, 42.35 MiB/s [2024-11-04T11:30:40.697Z] Received shutdown signal, test time was about 26.768966 seconds 00:26:06.127 00:26:06.127 Latency(us) 00:26:06.127 [2024-11-04T11:30:40.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.127 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:06.127 Verification LBA range: start 0x0 length 0x4000 00:26:06.127 Nvme0n1 : 26.77 10808.47 42.22 0.00 0.00 11823.62 278.19 3075822.93 00:26:06.127 [2024-11-04T11:30:40.697Z] =================================================================================================================== 00:26:06.127 [2024-11-04T11:30:40.697Z] Total : 10808.47 42.22 0.00 0.00 11823.62 
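Condensed from the teardown trace that follows: multipath_status deletes the test subsystem over RPC, unloads the kernel NVMe-oF modules, and kills the target process. A minimal standalone sketch of the same sequence (workspace path and subsystem NQN as used in this run; $nvmfpid is assumed to hold the target PID recorded at startup):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # detach the host cleanly first
  sync
  modprobe -v -r nvme-tcp      # also drops nvme_fabrics/nvme_keyring, per the rmmod lines below
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                                       # stop the nvmf_tgt reactor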
00:26:06.127 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:06.387 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:06.387 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:06.387 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:06.387 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:06.387 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:06.388 rmmod nvme_tcp 00:26:06.388 rmmod nvme_fabrics 00:26:06.388 rmmod nvme_keyring 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 1767915 ']' 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 1767915 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1767915 ']' 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1767915 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1767915 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1767915' 00:26:06.388 killing process with pid 1767915 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1767915 00:26:06.388 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1767915 00:26:06.648 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:06.648 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:06.648 12:30:40
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:06.648 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:06.648 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:26:06.648 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:06.648 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:26:06.648 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:06.648 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:06.648 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.648 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:06.648 12:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.558 12:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:08.558 00:26:08.558 real 0m39.844s 00:26:08.558 user 1m43.470s 00:26:08.558 sys 0m11.183s 00:26:08.558 12:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:08.558 12:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:08.558 ************************************ 00:26:08.558 END TEST nvmf_host_multipath_status 00:26:08.558 ************************************ 00:26:08.559 12:30:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:08.559 12:30:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:08.559 12:30:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:08.559 12:30:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.559 ************************************ 00:26:08.559 START TEST nvmf_discovery_remove_ifc 00:26:08.559 ************************************ 00:26:08.559 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:08.819 * Looking for test storage... 
00:26:08.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.819 --rc genhtml_branch_coverage=1 00:26:08.819 --rc genhtml_function_coverage=1 00:26:08.819 --rc genhtml_legend=1 00:26:08.819 --rc geninfo_all_blocks=1 00:26:08.819 --rc geninfo_unexecuted_blocks=1 00:26:08.819 00:26:08.819 ' 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.819 --rc genhtml_branch_coverage=1 00:26:08.819 --rc genhtml_function_coverage=1 00:26:08.819 --rc genhtml_legend=1 00:26:08.819 --rc geninfo_all_blocks=1 00:26:08.819 --rc geninfo_unexecuted_blocks=1 00:26:08.819 00:26:08.819 ' 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.819 --rc genhtml_branch_coverage=1 00:26:08.819 --rc genhtml_function_coverage=1 00:26:08.819 --rc genhtml_legend=1 00:26:08.819 --rc geninfo_all_blocks=1 00:26:08.819 --rc geninfo_unexecuted_blocks=1 00:26:08.819 00:26:08.819 ' 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.819 --rc genhtml_branch_coverage=1 00:26:08.819 --rc genhtml_function_coverage=1 00:26:08.819 --rc genhtml_legend=1 00:26:08.819 --rc geninfo_all_blocks=1 00:26:08.819 --rc geninfo_unexecuted_blocks=1 00:26:08.819 00:26:08.819 ' 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:08.819 
12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=[... expanded PATH elided: /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prepended repeatedly ahead of the standard system directories ...] 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=[... same toolchain PATH, /opt/go/1.21.1/bin first; elided ...] 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=[... same toolchain PATH, /opt/protoc/21.7/bin first; elided ...] 00:26:08.819 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo [... same expanded PATH elided ...] 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:08.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc --
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:08.820 12:30:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:16.954 12:30:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:16.954 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:16.954 12:30:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:16.954 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:16.954 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:16.955 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:16.955 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:16.955 
12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:16.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:16.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:26:16.955 00:26:16.955 --- 10.0.0.2 ping statistics --- 00:26:16.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.955 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:16.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:16.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:26:16.955 00:26:16.955 --- 10.0.0.1 ping statistics --- 00:26:16.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.955 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=1778287 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 1778287 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1778287 ']' 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
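For reference, the nvmf_tcp_init sequence traced above boils down to a short namespace recipe: the first E810 port (cvl_0_0) moves into a private namespace as the target side at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions are verified with a ping. A minimal sketch using the same commands (interface names are the ones this host assigned; adjust for your NICs):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP in, tagged with a comment so nvmftestfini can strip the rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator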
00:26:16.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:16.955 12:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:16.955 [2024-11-04 12:30:50.717792] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:26:16.955 [2024-11-04 12:30:50.717865] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.955 [2024-11-04 12:30:50.805611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.955 [2024-11-04 12:30:50.856419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:16.955 [2024-11-04 12:30:50.856468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:16.955 [2024-11-04 12:30:50.856476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:16.955 [2024-11-04 12:30:50.856489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:16.955 [2024-11-04 12:30:50.856495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:16.955 [2024-11-04 12:30:50.857241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.216 [2024-11-04 12:30:51.603459] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.216 [2024-11-04 12:30:51.611680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:17.216 null0 00:26:17.216 [2024-11-04 12:30:51.643653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1778628 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1778628 /tmp/host.sock 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1778628 ']' 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:17.216 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:17.216 12:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.216 [2024-11-04 12:30:51.724715] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:26:17.216 [2024-11-04 12:30:51.724810] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778628 ] 00:26:17.477 [2024-11-04 12:30:51.790339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.477 [2024-11-04 12:30:51.833782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.048 12:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:18.048 12:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:18.048 12:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:18.048 12:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:18.048 12:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.048 12:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.048 12:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.048 12:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:18.048 12:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.048 12:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.048 12:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.048 12:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:18.048 12:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.048 12:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.432 [2024-11-04 12:30:53.622812] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:19.432 [2024-11-04 12:30:53.622833] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:19.432 [2024-11-04 12:30:53.622846] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:19.432 [2024-11-04 12:30:53.749288] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:19.432 [2024-11-04 12:30:53.853540] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:19.433 [2024-11-04 12:30:53.853590] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:19.433 [2024-11-04 12:30:53.853611] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:19.433 [2024-11-04 12:30:53.853625] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:19.433 [2024-11-04 12:30:53.853645] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:19.433 12:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.433 12:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:19.433 12:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:19.433 12:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.433 [2024-11-04 12:30:53.860934] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xa32450 was disconnected and freed. delete nvme_qpair. 
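The rpc_cmd invocations in this test are thin wrappers around scripts/rpc.py aimed at the host app's socket. A sketch of the discovery attach just traced, plus the wait_for_bdev polling loop used below (socket path, discovery address/port, host NQN and timeouts as in this run; the rpc() helper is illustrative, not part of the test scripts):

  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }
  rpc bdev_nvme_set_options -e 1
  rpc framework_start_init                  # the host app was launched with --wait-for-rpc
  # attach through the discovery service on 10.0.0.2:8009; -b sets the controller name prefix
  rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
      --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
  # wait_for_bdev: poll once a second until the discovered namespace shows up as bdev nvme0n1
  while [[ "$(rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != nvme0n1 ]]; do
      sleep 1
  done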
00:26:19.433 12:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:19.433 12:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.433 12:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:19.433 12:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.433 12:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:19.433 12:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.433 12:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:19.433 12:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:19.433 12:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:19.693 12:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:19.693 12:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:19.693 12:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.693 12:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:19.693 12:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.693 12:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:19.693 12:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.693 12:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:19.693 12:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.693 12:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:19.693 12:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:20.633 12:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:20.633 12:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.633 12:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:20.633 12:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.633 12:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:20.633 12:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.633 12:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:20.633 12:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.633 12:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:20.633 12:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:21.573 12:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.573 12:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.573 12:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.573 12:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.573 12:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.573 12:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.573 12:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.833 12:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.833 12:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:21.833 12:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:22.774 12:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:22.774 12:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.774 12:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:22.774 12:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.774 12:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:22.774 12:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:22.774 12:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:22.774 12:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.774 12:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:22.774 12:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:23.715 12:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:23.715 12:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.715 12:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:23.715 12:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.715 12:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.715 12:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:23.715 12:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.715 12:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:23.715 12:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:23.715 12:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:25.097 12:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:25.098 12:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.098 12:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:25.098 12:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.098 12:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:25.098 12:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.098 12:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:25.098 [2024-11-04 12:30:59.294482] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:25.098 [2024-11-04 12:30:59.294522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.098 [2024-11-04 12:30:59.294534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-04 12:30:59.294544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.098 [2024-11-04 12:30:59.294552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-04 12:30:59.294560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.098 [2024-11-04 12:30:59.294567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-04 12:30:59.294576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.098 [2024-11-04 12:30:59.294583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-04 12:30:59.294592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.098 [2024-11-04 12:30:59.294603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.098 [2024-11-04 12:30:59.294611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ecc0 is same with the state(6) to be set 00:26:25.098 12:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.098 [2024-11-04 12:30:59.304504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0ecc0 (9): Bad file descriptor 00:26:25.098 [2024-11-04 12:30:59.314550] 
nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:25.098 12:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:25.098 12:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:26.038 12:31:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:26.038 12:31:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.038 12:31:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:26.038 12:31:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.038 12:31:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:26.038 12:31:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.038 [2024-11-04 12:31:00.340804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:26.038 [2024-11-04 12:31:00.340860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0ecc0 with addr=10.0.0.2, port=4420 00:26:26.038 [2024-11-04 12:31:00.340875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ecc0 is same with the state(6) to be set 00:26:26.038 12:31:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:26.038 [2024-11-04 12:31:00.340909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0ecc0 (9): Bad file descriptor 00:26:26.038 [2024-11-04 12:31:00.340960] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:26.038 [2024-11-04 12:31:00.340983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:26.038 [2024-11-04 12:31:00.340993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:26.038 [2024-11-04 12:31:00.341003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:26.038 [2024-11-04 12:31:00.341022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.038 [2024-11-04 12:31:00.341031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:26.038 12:31:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.038 12:31:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:26.038 12:31:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:26.979 [2024-11-04 12:31:01.343413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
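The errno 110 (ETIMEDOUT) failures above are the intended fault: the test pulled the target's data address out from under the live connection, so every reconnect attempt from the host now times out. The two commands responsible, exactly as traced earlier at @75/@76:

# Remove the target-side address and down the link inside the target's
# network namespace; the host's nvme0 controller has nothing to reconnect to.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

Once the retry budget is exhausted the controller is left in failed state and nvme0n1 drops out of the bdev list, which is what the wait_for_bdev '' loop is watching for.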
00:26:26.979 [2024-11-04 12:31:01.343435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:26.979 [2024-11-04 12:31:01.343443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:26.979 [2024-11-04 12:31:01.343451] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:26.979 [2024-11-04 12:31:01.343466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.979 [2024-11-04 12:31:01.343487] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:26.979 [2024-11-04 12:31:01.343515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.979 [2024-11-04 12:31:01.343526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.979 [2024-11-04 12:31:01.343537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.979 [2024-11-04 12:31:01.343544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.979 [2024-11-04 12:31:01.343553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.979 [2024-11-04 12:31:01.343560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.979 [2024-11-04 12:31:01.343569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.979 [2024-11-04 12:31:01.343577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.979 [2024-11-04 12:31:01.343585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.979 [2024-11-04 12:31:01.343593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.979 [2024-11-04 12:31:01.343600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
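With the controller failed and the bdev list about to drain, the test reverses the fault and waits for the discovery service to re-attach the subsystem; in the trace below this is @82/@83 followed by wait_for_bdev nvme1n1 (the re-attached controller takes the next free name, nvme1, rather than reusing nvme0). In effect:

# Restore the target address and link, then wait for rediscovery to
# surface the namespace again under the new controller name.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1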
00:26:26.979 [2024-11-04 12:31:01.343654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fe400 (9): Bad file descriptor 00:26:26.979 [2024-11-04 12:31:01.344673] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:26.979 [2024-11-04 12:31:01.344685] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:26.979 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:26.979 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.979 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:26.980 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.980 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:26.980 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.980 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:26.980 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.980 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:26.980 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:26.980 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:26.980 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:26.980 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:26.980 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.980 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:26.980 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.980 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:26.980 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.980 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:26.980 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.240 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:27.240 12:31:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:28.183 12:31:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:28.183 12:31:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:28.183 12:31:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.183 12:31:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:28.183 12:31:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.183 12:31:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:28.183 12:31:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.183 12:31:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.183 12:31:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:28.183 12:31:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:29.129 [2024-11-04 12:31:03.396941] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:29.129 [2024-11-04 12:31:03.396958] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:29.129 [2024-11-04 12:31:03.396972] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:29.129 [2024-11-04 12:31:03.485255] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:29.129 [2024-11-04 12:31:03.546923] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:29.129 [2024-11-04 12:31:03.546960] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:29.129 [2024-11-04 12:31:03.546980] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:29.129 [2024-11-04 12:31:03.546995] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:29.129 [2024-11-04 12:31:03.547003] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:29.129 [2024-11-04 12:31:03.555154] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xa0a450 was disconnected and freed. delete nvme_qpair. 
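The INFO lines above are the complete happy-path discovery sequence: attach to the discovery controller on 10.0.0.2:8009, fetch the discovery log page, create controller nvme1 from the single NVM entry, and free the transient qpair. All of this is driven by the discovery service the test started on the host app earlier; as a hedged sketch (the RPC is SPDK's bdev_nvme_start_discovery, but the exact flags may differ across SPDK versions), starting such a service looks roughly like:

# Connect to the discovery subsystem and auto-attach any NVM subsystems
# it reports, creating bdevs named nvme<N>n<M>.
rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 --wait-for-attach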
00:26:29.129 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:29.129 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.129 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:29.129 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.129 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:29.129 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.129 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:29.129 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.129 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:29.129 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:29.129 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1778628 00:26:29.129 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1778628 ']' 00:26:29.129 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1778628 00:26:29.129 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1778628 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1778628' 00:26:29.390 killing process with pid 1778628 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1778628 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1778628 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:29.390 rmmod nvme_tcp 00:26:29.390 rmmod nvme_fabrics 00:26:29.390 rmmod nvme_keyring 00:26:29.390 12:31:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 1778287 ']' 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 1778287 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1778287 ']' 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1778287 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:29.390 12:31:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1778287 00:26:29.651 12:31:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:29.651 12:31:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:29.651 12:31:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1778287' 00:26:29.651 killing process with pid 1778287 00:26:29.651 12:31:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1778287 00:26:29.651 12:31:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1778287 00:26:29.651 12:31:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:29.651 12:31:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:29.651 12:31:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:29.651 12:31:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:29.651 12:31:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:26:29.651 12:31:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:26:29.651 12:31:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:29.651 12:31:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:29.651 12:31:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:29.651 12:31:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.651 12:31:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.651 12:31:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:32.202 00:26:32.202 real 0m23.069s 00:26:32.202 user 0m27.182s 00:26:32.202 sys 0m6.890s 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.202 ************************************ 00:26:32.202 END TEST nvmf_discovery_remove_ifc 00:26:32.202 ************************************ 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.202 ************************************ 00:26:32.202 START TEST nvmf_identify_kernel_target 00:26:32.202 ************************************ 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:32.202 * Looking for test storage... 00:26:32.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:32.202 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:32.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.203 --rc genhtml_branch_coverage=1 00:26:32.203 --rc genhtml_function_coverage=1 00:26:32.203 --rc genhtml_legend=1 00:26:32.203 --rc geninfo_all_blocks=1 00:26:32.203 --rc geninfo_unexecuted_blocks=1 00:26:32.203 00:26:32.203 ' 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:32.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.203 --rc genhtml_branch_coverage=1 00:26:32.203 --rc genhtml_function_coverage=1 00:26:32.203 --rc genhtml_legend=1 00:26:32.203 --rc geninfo_all_blocks=1 00:26:32.203 --rc geninfo_unexecuted_blocks=1 00:26:32.203 00:26:32.203 ' 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:32.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.203 --rc genhtml_branch_coverage=1 00:26:32.203 --rc genhtml_function_coverage=1 00:26:32.203 --rc genhtml_legend=1 00:26:32.203 --rc geninfo_all_blocks=1 00:26:32.203 --rc geninfo_unexecuted_blocks=1 00:26:32.203 00:26:32.203 ' 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:32.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.203 --rc genhtml_branch_coverage=1 00:26:32.203 --rc genhtml_function_coverage=1 00:26:32.203 --rc genhtml_legend=1 00:26:32.203 --rc geninfo_all_blocks=1 00:26:32.203 --rc geninfo_unexecuted_blocks=1 00:26:32.203 00:26:32.203 ' 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:32.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:32.203 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:32.204 12:31:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:40.349 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:40.350 12:31:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:40.350 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:40.350 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:40.350 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:40.350 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:40.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:26:40.350 00:26:40.350 --- 10.0.0.2 ping statistics --- 00:26:40.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.350 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:26:40.350 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:40.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:26:40.350 00:26:40.350 --- 10.0.0.1 ping statistics --- 00:26:40.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.351 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:40.351 12:31:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:40.351 12:31:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:42.899 Waiting for block devices as requested 00:26:42.899 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:42.899 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:42.899 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:42.899 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:42.899 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:42.899 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:43.160 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:43.160 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:43.160 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:43.419 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:43.419 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:43.679 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:43.679 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:43.679 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:43.679 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:43.940 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:43.940 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
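The configure_kernel_target trace below drives the kernel nvmet configfs tree directly; because xtrace does not record output redirections, the bare `echo` commands appear without their targets. A minimal standalone sketch of the same sequence, with the attribute paths filled in from the upstream nvmet configfs layout (the NQN, block device, and address values match this run; the exact attribute names, e.g. attr_model, are assumptions inferred from the kernel interface, not taken from the trace), would be:

  modprobe nvmet nvmet-tcp
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"  # assumed redirection target
  echo 1 > "$subsys/attr_allow_any_host"                        # assumed redirection target
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"        # back the namespace with the idle NVMe disk
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"                  # listen address inside the test netns
  echo tcp  > "$nvmet/ports/1/addr_trtype"
  echo 4420 > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4 > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"                  # expose the subsystem on the port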
00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:44.201 No valid GPT data, bailing 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:44.201 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:44.463 00:26:44.463 Discovery Log Number of Records 2, Generation counter 2 00:26:44.463 =====Discovery Log Entry 0====== 00:26:44.463 trtype: tcp 00:26:44.463 adrfam: ipv4 00:26:44.463 subtype: current discovery subsystem 00:26:44.463 treq: not specified, sq flow control disable supported 00:26:44.463 portid: 1 00:26:44.463 trsvcid: 4420 00:26:44.463 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:44.463 traddr: 10.0.0.1 00:26:44.463 eflags: none 00:26:44.463 sectype: none 00:26:44.463 =====Discovery Log Entry 1====== 00:26:44.463 trtype: tcp 00:26:44.463 adrfam: ipv4 00:26:44.463 subtype: nvme subsystem 00:26:44.463 treq: not specified, sq flow control disable 
supported 00:26:44.463 portid: 1 00:26:44.463 trsvcid: 4420 00:26:44.463 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:44.463 traddr: 10.0.0.1 00:26:44.463 eflags: none 00:26:44.463 sectype: none 00:26:44.463 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:44.463 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:44.463 ===================================================== 00:26:44.463 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:44.463 ===================================================== 00:26:44.463 Controller Capabilities/Features 00:26:44.463 ================================ 00:26:44.463 Vendor ID: 0000 00:26:44.463 Subsystem Vendor ID: 0000 00:26:44.463 Serial Number: 8dcdb83e37be96ac15b3 00:26:44.463 Model Number: Linux 00:26:44.463 Firmware Version: 6.8.9-20 00:26:44.463 Recommended Arb Burst: 0 00:26:44.463 IEEE OUI Identifier: 00 00 00 00:26:44.463 Multi-path I/O 00:26:44.463 May have multiple subsystem ports: No 00:26:44.463 May have multiple controllers: No 00:26:44.463 Associated with SR-IOV VF: No 00:26:44.463 Max Data Transfer Size: Unlimited 00:26:44.463 Max Number of Namespaces: 0 00:26:44.463 Max Number of I/O Queues: 1024 00:26:44.463 NVMe Specification Version (VS): 1.3 00:26:44.463 NVMe Specification Version (Identify): 1.3 00:26:44.463 Maximum Queue Entries: 1024 00:26:44.463 Contiguous Queues Required: No 00:26:44.463 Arbitration Mechanisms Supported 00:26:44.463 Weighted Round Robin: Not Supported 00:26:44.463 Vendor Specific: Not Supported 00:26:44.463 Reset Timeout: 7500 ms 00:26:44.463 Doorbell Stride: 4 bytes 00:26:44.463 NVM Subsystem Reset: Not Supported 00:26:44.463 Command Sets Supported 00:26:44.463 NVM Command Set: Supported 00:26:44.463 Boot Partition: Not Supported 00:26:44.463 Memory Page Size Minimum: 4096 bytes 00:26:44.463 Memory Page Size Maximum: 4096 bytes 00:26:44.463 Persistent Memory Region: Not Supported 00:26:44.463 Optional Asynchronous Events Supported 00:26:44.463 Namespace Attribute Notices: Not Supported 00:26:44.463 Firmware Activation Notices: Not Supported 00:26:44.463 ANA Change Notices: Not Supported 00:26:44.463 PLE Aggregate Log Change Notices: Not Supported 00:26:44.463 LBA Status Info Alert Notices: Not Supported 00:26:44.463 EGE Aggregate Log Change Notices: Not Supported 00:26:44.463 Normal NVM Subsystem Shutdown event: Not Supported 00:26:44.463 Zone Descriptor Change Notices: Not Supported 00:26:44.463 Discovery Log Change Notices: Supported 00:26:44.463 Controller Attributes 00:26:44.463 128-bit Host Identifier: Not Supported 00:26:44.463 Non-Operational Permissive Mode: Not Supported 00:26:44.463 NVM Sets: Not Supported 00:26:44.463 Read Recovery Levels: Not Supported 00:26:44.463 Endurance Groups: Not Supported 00:26:44.463 Predictable Latency Mode: Not Supported 00:26:44.463 Traffic Based Keep ALive: Not Supported 00:26:44.463 Namespace Granularity: Not Supported 00:26:44.463 SQ Associations: Not Supported 00:26:44.463 UUID List: Not Supported 00:26:44.463 Multi-Domain Subsystem: Not Supported 00:26:44.463 Fixed Capacity Management: Not Supported 00:26:44.463 Variable Capacity Management: Not Supported 00:26:44.463 Delete Endurance Group: Not Supported 00:26:44.463 Delete NVM Set: Not Supported 00:26:44.463 Extended LBA Formats Supported: Not Supported 00:26:44.463 Flexible Data Placement 
Supported: Not Supported 00:26:44.463 00:26:44.463 Controller Memory Buffer Support 00:26:44.463 ================================ 00:26:44.463 Supported: No 00:26:44.463 00:26:44.463 Persistent Memory Region Support 00:26:44.463 ================================ 00:26:44.463 Supported: No 00:26:44.463 00:26:44.463 Admin Command Set Attributes 00:26:44.463 ============================ 00:26:44.463 Security Send/Receive: Not Supported 00:26:44.463 Format NVM: Not Supported 00:26:44.463 Firmware Activate/Download: Not Supported 00:26:44.463 Namespace Management: Not Supported 00:26:44.463 Device Self-Test: Not Supported 00:26:44.463 Directives: Not Supported 00:26:44.463 NVMe-MI: Not Supported 00:26:44.463 Virtualization Management: Not Supported 00:26:44.463 Doorbell Buffer Config: Not Supported 00:26:44.463 Get LBA Status Capability: Not Supported 00:26:44.463 Command & Feature Lockdown Capability: Not Supported 00:26:44.463 Abort Command Limit: 1 00:26:44.463 Async Event Request Limit: 1 00:26:44.463 Number of Firmware Slots: N/A 00:26:44.463 Firmware Slot 1 Read-Only: N/A 00:26:44.463 Firmware Activation Without Reset: N/A 00:26:44.463 Multiple Update Detection Support: N/A 00:26:44.463 Firmware Update Granularity: No Information Provided 00:26:44.463 Per-Namespace SMART Log: No 00:26:44.463 Asymmetric Namespace Access Log Page: Not Supported 00:26:44.463 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:44.463 Command Effects Log Page: Not Supported 00:26:44.463 Get Log Page Extended Data: Supported 00:26:44.463 Telemetry Log Pages: Not Supported 00:26:44.463 Persistent Event Log Pages: Not Supported 00:26:44.463 Supported Log Pages Log Page: May Support 00:26:44.463 Commands Supported & Effects Log Page: Not Supported 00:26:44.463 Feature Identifiers & Effects Log Page:May Support 00:26:44.463 NVMe-MI Commands & Effects Log Page: May Support 00:26:44.463 Data Area 4 for Telemetry Log: Not Supported 00:26:44.463 Error Log Page Entries Supported: 1 00:26:44.463 Keep Alive: Not Supported 00:26:44.463 00:26:44.463 NVM Command Set Attributes 00:26:44.463 ========================== 00:26:44.463 Submission Queue Entry Size 00:26:44.463 Max: 1 00:26:44.463 Min: 1 00:26:44.463 Completion Queue Entry Size 00:26:44.463 Max: 1 00:26:44.463 Min: 1 00:26:44.463 Number of Namespaces: 0 00:26:44.463 Compare Command: Not Supported 00:26:44.463 Write Uncorrectable Command: Not Supported 00:26:44.463 Dataset Management Command: Not Supported 00:26:44.463 Write Zeroes Command: Not Supported 00:26:44.463 Set Features Save Field: Not Supported 00:26:44.463 Reservations: Not Supported 00:26:44.463 Timestamp: Not Supported 00:26:44.463 Copy: Not Supported 00:26:44.463 Volatile Write Cache: Not Present 00:26:44.463 Atomic Write Unit (Normal): 1 00:26:44.463 Atomic Write Unit (PFail): 1 00:26:44.463 Atomic Compare & Write Unit: 1 00:26:44.463 Fused Compare & Write: Not Supported 00:26:44.463 Scatter-Gather List 00:26:44.463 SGL Command Set: Supported 00:26:44.463 SGL Keyed: Not Supported 00:26:44.463 SGL Bit Bucket Descriptor: Not Supported 00:26:44.463 SGL Metadata Pointer: Not Supported 00:26:44.463 Oversized SGL: Not Supported 00:26:44.463 SGL Metadata Address: Not Supported 00:26:44.463 SGL Offset: Supported 00:26:44.463 Transport SGL Data Block: Not Supported 00:26:44.463 Replay Protected Memory Block: Not Supported 00:26:44.463 00:26:44.463 Firmware Slot Information 00:26:44.463 ========================= 00:26:44.463 Active slot: 0 00:26:44.463 00:26:44.463 00:26:44.463 Error Log 00:26:44.463 
========= 00:26:44.463 00:26:44.463 Active Namespaces 00:26:44.463 ================= 00:26:44.463 Discovery Log Page 00:26:44.463 ================== 00:26:44.464 Generation Counter: 2 00:26:44.464 Number of Records: 2 00:26:44.464 Record Format: 0 00:26:44.464 00:26:44.464 Discovery Log Entry 0 00:26:44.464 ---------------------- 00:26:44.464 Transport Type: 3 (TCP) 00:26:44.464 Address Family: 1 (IPv4) 00:26:44.464 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:44.464 Entry Flags: 00:26:44.464 Duplicate Returned Information: 0 00:26:44.464 Explicit Persistent Connection Support for Discovery: 0 00:26:44.464 Transport Requirements: 00:26:44.464 Secure Channel: Not Specified 00:26:44.464 Port ID: 1 (0x0001) 00:26:44.464 Controller ID: 65535 (0xffff) 00:26:44.464 Admin Max SQ Size: 32 00:26:44.464 Transport Service Identifier: 4420 00:26:44.464 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:44.464 Transport Address: 10.0.0.1 00:26:44.464 Discovery Log Entry 1 00:26:44.464 ---------------------- 00:26:44.464 Transport Type: 3 (TCP) 00:26:44.464 Address Family: 1 (IPv4) 00:26:44.464 Subsystem Type: 2 (NVM Subsystem) 00:26:44.464 Entry Flags: 00:26:44.464 Duplicate Returned Information: 0 00:26:44.464 Explicit Persistent Connection Support for Discovery: 0 00:26:44.464 Transport Requirements: 00:26:44.464 Secure Channel: Not Specified 00:26:44.464 Port ID: 1 (0x0001) 00:26:44.464 Controller ID: 65535 (0xffff) 00:26:44.464 Admin Max SQ Size: 32 00:26:44.464 Transport Service Identifier: 4420 00:26:44.464 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:44.464 Transport Address: 10.0.0.1 00:26:44.464 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:44.464 get_feature(0x01) failed 00:26:44.464 get_feature(0x02) failed 00:26:44.464 get_feature(0x04) failed 00:26:44.464 ===================================================== 00:26:44.464 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:44.464 ===================================================== 00:26:44.464 Controller Capabilities/Features 00:26:44.464 ================================ 00:26:44.464 Vendor ID: 0000 00:26:44.464 Subsystem Vendor ID: 0000 00:26:44.464 Serial Number: 71566814d98de64fd1bb 00:26:44.464 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:44.464 Firmware Version: 6.8.9-20 00:26:44.464 Recommended Arb Burst: 6 00:26:44.464 IEEE OUI Identifier: 00 00 00 00:26:44.464 Multi-path I/O 00:26:44.464 May have multiple subsystem ports: Yes 00:26:44.464 May have multiple controllers: Yes 00:26:44.464 Associated with SR-IOV VF: No 00:26:44.464 Max Data Transfer Size: Unlimited 00:26:44.464 Max Number of Namespaces: 1024 00:26:44.464 Max Number of I/O Queues: 128 00:26:44.464 NVMe Specification Version (VS): 1.3 00:26:44.464 NVMe Specification Version (Identify): 1.3 00:26:44.464 Maximum Queue Entries: 1024 00:26:44.464 Contiguous Queues Required: No 00:26:44.464 Arbitration Mechanisms Supported 00:26:44.464 Weighted Round Robin: Not Supported 00:26:44.464 Vendor Specific: Not Supported 00:26:44.464 Reset Timeout: 7500 ms 00:26:44.464 Doorbell Stride: 4 bytes 00:26:44.464 NVM Subsystem Reset: Not Supported 00:26:44.464 Command Sets Supported 00:26:44.464 NVM Command Set: Supported 00:26:44.464 Boot Partition: Not Supported 00:26:44.464 
Memory Page Size Minimum: 4096 bytes 00:26:44.464 Memory Page Size Maximum: 4096 bytes 00:26:44.464 Persistent Memory Region: Not Supported 00:26:44.464 Optional Asynchronous Events Supported 00:26:44.464 Namespace Attribute Notices: Supported 00:26:44.464 Firmware Activation Notices: Not Supported 00:26:44.464 ANA Change Notices: Supported 00:26:44.464 PLE Aggregate Log Change Notices: Not Supported 00:26:44.464 LBA Status Info Alert Notices: Not Supported 00:26:44.464 EGE Aggregate Log Change Notices: Not Supported 00:26:44.464 Normal NVM Subsystem Shutdown event: Not Supported 00:26:44.464 Zone Descriptor Change Notices: Not Supported 00:26:44.464 Discovery Log Change Notices: Not Supported 00:26:44.464 Controller Attributes 00:26:44.464 128-bit Host Identifier: Supported 00:26:44.464 Non-Operational Permissive Mode: Not Supported 00:26:44.464 NVM Sets: Not Supported 00:26:44.464 Read Recovery Levels: Not Supported 00:26:44.464 Endurance Groups: Not Supported 00:26:44.464 Predictable Latency Mode: Not Supported 00:26:44.464 Traffic Based Keep ALive: Supported 00:26:44.464 Namespace Granularity: Not Supported 00:26:44.464 SQ Associations: Not Supported 00:26:44.464 UUID List: Not Supported 00:26:44.464 Multi-Domain Subsystem: Not Supported 00:26:44.464 Fixed Capacity Management: Not Supported 00:26:44.464 Variable Capacity Management: Not Supported 00:26:44.464 Delete Endurance Group: Not Supported 00:26:44.464 Delete NVM Set: Not Supported 00:26:44.464 Extended LBA Formats Supported: Not Supported 00:26:44.464 Flexible Data Placement Supported: Not Supported 00:26:44.464 00:26:44.464 Controller Memory Buffer Support 00:26:44.464 ================================ 00:26:44.464 Supported: No 00:26:44.464 00:26:44.464 Persistent Memory Region Support 00:26:44.464 ================================ 00:26:44.464 Supported: No 00:26:44.464 00:26:44.464 Admin Command Set Attributes 00:26:44.464 ============================ 00:26:44.464 Security Send/Receive: Not Supported 00:26:44.464 Format NVM: Not Supported 00:26:44.464 Firmware Activate/Download: Not Supported 00:26:44.464 Namespace Management: Not Supported 00:26:44.464 Device Self-Test: Not Supported 00:26:44.464 Directives: Not Supported 00:26:44.464 NVMe-MI: Not Supported 00:26:44.464 Virtualization Management: Not Supported 00:26:44.464 Doorbell Buffer Config: Not Supported 00:26:44.464 Get LBA Status Capability: Not Supported 00:26:44.464 Command & Feature Lockdown Capability: Not Supported 00:26:44.464 Abort Command Limit: 4 00:26:44.464 Async Event Request Limit: 4 00:26:44.464 Number of Firmware Slots: N/A 00:26:44.464 Firmware Slot 1 Read-Only: N/A 00:26:44.464 Firmware Activation Without Reset: N/A 00:26:44.464 Multiple Update Detection Support: N/A 00:26:44.464 Firmware Update Granularity: No Information Provided 00:26:44.464 Per-Namespace SMART Log: Yes 00:26:44.464 Asymmetric Namespace Access Log Page: Supported 00:26:44.464 ANA Transition Time : 10 sec 00:26:44.464 00:26:44.464 Asymmetric Namespace Access Capabilities 00:26:44.464 ANA Optimized State : Supported 00:26:44.464 ANA Non-Optimized State : Supported 00:26:44.464 ANA Inaccessible State : Supported 00:26:44.464 ANA Persistent Loss State : Supported 00:26:44.464 ANA Change State : Supported 00:26:44.464 ANAGRPID is not changed : No 00:26:44.464 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:44.464 00:26:44.464 ANA Group Identifier Maximum : 128 00:26:44.464 Number of ANA Group Identifiers : 128 00:26:44.464 Max Number of Allowed Namespaces : 1024 00:26:44.464 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:44.464 Command Effects Log Page: Supported 00:26:44.464 Get Log Page Extended Data: Supported 00:26:44.464 Telemetry Log Pages: Not Supported 00:26:44.464 Persistent Event Log Pages: Not Supported 00:26:44.464 Supported Log Pages Log Page: May Support 00:26:44.464 Commands Supported & Effects Log Page: Not Supported 00:26:44.464 Feature Identifiers & Effects Log Page:May Support 00:26:44.464 NVMe-MI Commands & Effects Log Page: May Support 00:26:44.464 Data Area 4 for Telemetry Log: Not Supported 00:26:44.464 Error Log Page Entries Supported: 128 00:26:44.464 Keep Alive: Supported 00:26:44.464 Keep Alive Granularity: 1000 ms 00:26:44.464 00:26:44.464 NVM Command Set Attributes 00:26:44.464 ========================== 00:26:44.464 Submission Queue Entry Size 00:26:44.464 Max: 64 00:26:44.464 Min: 64 00:26:44.464 Completion Queue Entry Size 00:26:44.464 Max: 16 00:26:44.464 Min: 16 00:26:44.464 Number of Namespaces: 1024 00:26:44.464 Compare Command: Not Supported 00:26:44.464 Write Uncorrectable Command: Not Supported 00:26:44.464 Dataset Management Command: Supported 00:26:44.464 Write Zeroes Command: Supported 00:26:44.464 Set Features Save Field: Not Supported 00:26:44.464 Reservations: Not Supported 00:26:44.464 Timestamp: Not Supported 00:26:44.464 Copy: Not Supported 00:26:44.464 Volatile Write Cache: Present 00:26:44.464 Atomic Write Unit (Normal): 1 00:26:44.464 Atomic Write Unit (PFail): 1 00:26:44.464 Atomic Compare & Write Unit: 1 00:26:44.464 Fused Compare & Write: Not Supported 00:26:44.464 Scatter-Gather List 00:26:44.464 SGL Command Set: Supported 00:26:44.464 SGL Keyed: Not Supported 00:26:44.464 SGL Bit Bucket Descriptor: Not Supported 00:26:44.464 SGL Metadata Pointer: Not Supported 00:26:44.464 Oversized SGL: Not Supported 00:26:44.464 SGL Metadata Address: Not Supported 00:26:44.464 SGL Offset: Supported 00:26:44.464 Transport SGL Data Block: Not Supported 00:26:44.464 Replay Protected Memory Block: Not Supported 00:26:44.464 00:26:44.464 Firmware Slot Information 00:26:44.464 ========================= 00:26:44.465 Active slot: 0 00:26:44.465 00:26:44.465 Asymmetric Namespace Access 00:26:44.465 =========================== 00:26:44.465 Change Count : 0 00:26:44.465 Number of ANA Group Descriptors : 1 00:26:44.465 ANA Group Descriptor : 0 00:26:44.465 ANA Group ID : 1 00:26:44.465 Number of NSID Values : 1 00:26:44.465 Change Count : 0 00:26:44.465 ANA State : 1 00:26:44.465 Namespace Identifier : 1 00:26:44.465 00:26:44.465 Commands Supported and Effects 00:26:44.465 ============================== 00:26:44.465 Admin Commands 00:26:44.465 -------------- 00:26:44.465 Get Log Page (02h): Supported 00:26:44.465 Identify (06h): Supported 00:26:44.465 Abort (08h): Supported 00:26:44.465 Set Features (09h): Supported 00:26:44.465 Get Features (0Ah): Supported 00:26:44.465 Asynchronous Event Request (0Ch): Supported 00:26:44.465 Keep Alive (18h): Supported 00:26:44.465 I/O Commands 00:26:44.465 ------------ 00:26:44.465 Flush (00h): Supported 00:26:44.465 Write (01h): Supported LBA-Change 00:26:44.465 Read (02h): Supported 00:26:44.465 Write Zeroes (08h): Supported LBA-Change 00:26:44.465 Dataset Management (09h): Supported 00:26:44.465 00:26:44.465 Error Log 00:26:44.465 ========= 00:26:44.465 Entry: 0 00:26:44.465 Error Count: 0x3 00:26:44.465 Submission Queue Id: 0x0 00:26:44.465 Command Id: 0x5 00:26:44.465 Phase Bit: 0 00:26:44.465 Status Code: 0x2 00:26:44.465 Status Code Type: 0x0 00:26:44.465 Do Not Retry: 1 00:26:44.465 
Error Location: 0x28 00:26:44.465 LBA: 0x0 00:26:44.465 Namespace: 0x0 00:26:44.465 Vendor Log Page: 0x0 00:26:44.465 ----------- 00:26:44.465 Entry: 1 00:26:44.465 Error Count: 0x2 00:26:44.465 Submission Queue Id: 0x0 00:26:44.465 Command Id: 0x5 00:26:44.465 Phase Bit: 0 00:26:44.465 Status Code: 0x2 00:26:44.465 Status Code Type: 0x0 00:26:44.465 Do Not Retry: 1 00:26:44.465 Error Location: 0x28 00:26:44.465 LBA: 0x0 00:26:44.465 Namespace: 0x0 00:26:44.465 Vendor Log Page: 0x0 00:26:44.465 ----------- 00:26:44.465 Entry: 2 00:26:44.465 Error Count: 0x1 00:26:44.465 Submission Queue Id: 0x0 00:26:44.465 Command Id: 0x4 00:26:44.465 Phase Bit: 0 00:26:44.465 Status Code: 0x2 00:26:44.465 Status Code Type: 0x0 00:26:44.465 Do Not Retry: 1 00:26:44.465 Error Location: 0x28 00:26:44.465 LBA: 0x0 00:26:44.465 Namespace: 0x0 00:26:44.465 Vendor Log Page: 0x0 00:26:44.465 00:26:44.465 Number of Queues 00:26:44.465 ================ 00:26:44.465 Number of I/O Submission Queues: 128 00:26:44.465 Number of I/O Completion Queues: 128 00:26:44.465 00:26:44.465 ZNS Specific Controller Data 00:26:44.465 ============================ 00:26:44.465 Zone Append Size Limit: 0 00:26:44.465 00:26:44.465 00:26:44.465 Active Namespaces 00:26:44.465 ================= 00:26:44.465 get_feature(0x05) failed 00:26:44.465 Namespace ID:1 00:26:44.465 Command Set Identifier: NVM (00h) 00:26:44.465 Deallocate: Supported 00:26:44.465 Deallocated/Unwritten Error: Not Supported 00:26:44.465 Deallocated Read Value: Unknown 00:26:44.465 Deallocate in Write Zeroes: Not Supported 00:26:44.465 Deallocated Guard Field: 0xFFFF 00:26:44.465 Flush: Supported 00:26:44.465 Reservation: Not Supported 00:26:44.465 Namespace Sharing Capabilities: Multiple Controllers 00:26:44.465 Size (in LBAs): 3750748848 (1788GiB) 00:26:44.465 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:44.465 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:44.465 UUID: d2185e2a-88d7-479a-8b78-58342b3754a6 00:26:44.465 Thin Provisioning: Not Supported 00:26:44.465 Per-NS Atomic Units: Yes 00:26:44.465 Atomic Write Unit (Normal): 8 00:26:44.465 Atomic Write Unit (PFail): 8 00:26:44.465 Preferred Write Granularity: 8 00:26:44.465 Atomic Compare & Write Unit: 8 00:26:44.465 Atomic Boundary Size (Normal): 0 00:26:44.465 Atomic Boundary Size (PFail): 0 00:26:44.465 Atomic Boundary Offset: 0 00:26:44.465 NGUID/EUI64 Never Reused: No 00:26:44.465 ANA group ID: 1 00:26:44.465 Namespace Write Protected: No 00:26:44.465 Number of LBA Formats: 1 00:26:44.465 Current LBA Format: LBA Format #00 00:26:44.465 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:44.465 00:26:44.465 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:44.465 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:44.465 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:44.465 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:44.465 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:44.465 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:44.465 12:31:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:44.465 rmmod nvme_tcp 00:26:44.465 rmmod nvme_fabrics 00:26:44.726 12:31:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:44.726 12:31:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:44.726 12:31:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:44.726 12:31:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:26:44.726 12:31:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:44.726 12:31:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:44.726 12:31:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:44.726 12:31:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:44.726 12:31:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:44.726 12:31:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:26:44.726 12:31:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:26:44.726 12:31:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:44.726 12:31:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:44.726 12:31:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.726 12:31:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.726 12:31:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.740 12:31:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:46.740 12:31:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:46.740 12:31:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:46.740 12:31:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:26:46.740 12:31:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:46.740 12:31:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:46.740 12:31:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:46.740 12:31:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:46.740 12:31:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:26:46.740 12:31:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:26:46.741 12:31:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:50.046 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:50.046 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:50.046 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:26:50.046 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:50.046 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:50.046 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:50.046 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:50.046 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:50.046 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:50.046 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:50.046 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:50.046 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:50.046 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:50.046 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:50.046 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:50.046 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:50.046 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:26:50.307 00:26:50.307 real 0m18.468s 00:26:50.307 user 0m4.768s 00:26:50.307 sys 0m10.585s 00:26:50.307 12:31:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:50.307 12:31:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:50.307 ************************************ 00:26:50.307 END TEST nvmf_identify_kernel_target 00:26:50.307 ************************************ 00:26:50.307 12:31:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:50.307 12:31:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:50.307 12:31:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:50.307 12:31:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.307 ************************************ 00:26:50.307 START TEST nvmf_auth_host 00:26:50.307 ************************************ 00:26:50.307 12:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:50.569 * Looking for test storage... 
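The xtrace that follows steps through scripts/common.sh's cmp_versions helper as it evaluates `lt 1.15 2` for the lcov version check. A condensed, hypothetical equivalent of that dotted-version comparison (a sketch of the technique, not the script's literal code) is:

  # Succeeds when dotted version $1 sorts strictly before $2.
  version_lt() {
    local IFS=. i
    local -a a=($1) b=($2)              # split "1.15" -> (1 15) on the dot
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields compare as 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                            # equal versions are not "less than"
  }

Usage matches the trace: `version_lt 1.15 2` returns success, so the newer lcov option set is selected.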
00:26:50.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:50.569 12:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:50.569 12:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:26:50.569 12:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:50.569 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:50.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.569 --rc genhtml_branch_coverage=1 00:26:50.569 --rc genhtml_function_coverage=1 00:26:50.569 --rc genhtml_legend=1 00:26:50.569 --rc geninfo_all_blocks=1 00:26:50.569 --rc geninfo_unexecuted_blocks=1 00:26:50.570 00:26:50.570 ' 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:50.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.570 --rc genhtml_branch_coverage=1 00:26:50.570 --rc genhtml_function_coverage=1 00:26:50.570 --rc genhtml_legend=1 00:26:50.570 --rc geninfo_all_blocks=1 00:26:50.570 --rc geninfo_unexecuted_blocks=1 00:26:50.570 00:26:50.570 ' 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:50.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.570 --rc genhtml_branch_coverage=1 00:26:50.570 --rc genhtml_function_coverage=1 00:26:50.570 --rc genhtml_legend=1 00:26:50.570 --rc geninfo_all_blocks=1 00:26:50.570 --rc geninfo_unexecuted_blocks=1 00:26:50.570 00:26:50.570 ' 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:50.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.570 --rc genhtml_branch_coverage=1 00:26:50.570 --rc genhtml_function_coverage=1 00:26:50.570 --rc genhtml_legend=1 00:26:50.570 --rc geninfo_all_blocks=1 00:26:50.570 --rc geninfo_unexecuted_blocks=1 00:26:50.570 00:26:50.570 ' 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.570 12:31:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:50.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:50.570 12:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:58.718 12:31:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:58.718 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:58.718 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.718 
12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:58.718 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.718 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:58.719 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.719 12:31:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:58.719 12:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:58.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:58.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms
00:26:58.719
00:26:58.719 --- 10.0.0.2 ping statistics ---
00:26:58.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:58.719 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:58.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:58.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms
00:26:58.719
00:26:58.719 --- 10.0.0.1 ping statistics ---
00:26:58.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:58.719 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=1792571
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 1792571
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1792571 ']'
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
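
Note: stripped of the xtrace noise, the plumbing above builds a point-to-point link between the two E810 ports, with one side moved into a private network namespace (names and addresses exactly as in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # one port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # the other stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns

The NVMF_APP assignment then prefixes the SPDK application with ip netns exec cvl_0_0_ns_spdk, so the app runs on the 10.0.0.2 side of the link while the kernel target configured further down listens on 10.0.0.1 in the root namespace.
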
00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.719 12:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=a395c87d77fcc7523a1abe0370590ac6 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.g0g 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key a395c87d77fcc7523a1abe0370590ac6 0 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 a395c87d77fcc7523a1abe0370590ac6 0 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=a395c87d77fcc7523a1abe0370590ac6 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.g0g 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.g0g 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.g0g 00:26:58.719 12:31:33 
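
Note: gen_dhchap_key null 32 above reads 16 random bytes, keeps the 32-character hex string as the secret, and wraps it into the DH-HMAC-CHAP interchange format DHHC-1:<digest>:<base64>:. Judging by the keys this run produced, the base64 payload is the ASCII secret followed by its little-endian CRC32, with digest indices 0=null, 1=sha256, 2=sha384, 3=sha512. An equivalent generator (gen_dhchap_key_sketch is a hypothetical name, not the harness function):

    gen_dhchap_key_sketch() {
        local digest=$1 len=$2 key
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of secret
        python3 -c 'import base64,sys,zlib; key=sys.argv[1].encode(); digest={"null":0,"sha256":1,"sha384":2,"sha512":3}[sys.argv[2]]; crc=zlib.crc32(key).to_bytes(4,"little"); print("DHHC-1:{:02x}:{}:".format(digest,base64.b64encode(key+crc).decode()))' "$key" "$digest"
    }
    gen_dhchap_key_sketch null 32    # prints DHHC-1:00:<base64>: in the same form as keys[0]
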
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=14f840c771d4d1866dc47e7f1495b33ed2eca8e868cb7aee138408374e4bd626 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.vlj 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 14f840c771d4d1866dc47e7f1495b33ed2eca8e868cb7aee138408374e4bd626 3 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 14f840c771d4d1866dc47e7f1495b33ed2eca8e868cb7aee138408374e4bd626 3 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:58.719 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=14f840c771d4d1866dc47e7f1495b33ed2eca8e868cb7aee138408374e4bd626 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.vlj 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.vlj 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.vlj 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=4e9e21198e636a465f36294247b541c86aa579ebae1ce0be 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.L2J 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # 
format_dhchap_key 4e9e21198e636a465f36294247b541c86aa579ebae1ce0be 0 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 4e9e21198e636a465f36294247b541c86aa579ebae1ce0be 0 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=4e9e21198e636a465f36294247b541c86aa579ebae1ce0be 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.L2J 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.L2J 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.L2J 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=1a006370c86f5679f74e89da6b32c978d1f889037d0c6385 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.p8L 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 1a006370c86f5679f74e89da6b32c978d1f889037d0c6385 2 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 1a006370c86f5679f74e89da6b32c978d1f889037d0c6385 2 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=1a006370c86f5679f74e89da6b32c978d1f889037d0c6385 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:58.720 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.p8L 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.p8L 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.p8L 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@749 -- # local digest len file key 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=1a51777c71b538f9f9411b7241a3a5e6 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.vyN 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 1a51777c71b538f9f9411b7241a3a5e6 1 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 1a51777c71b538f9f9411b7241a3a5e6 1 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=1a51777c71b538f9f9411b7241a3a5e6 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.vyN 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.vyN 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.vyN 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:58.980 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=f7cde64fa3def341e7ec5b791674634e 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.V2u 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key f7cde64fa3def341e7ec5b791674634e 1 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 f7cde64fa3def341e7ec5b791674634e 1 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key 
digest 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=f7cde64fa3def341e7ec5b791674634e 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.V2u 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.V2u 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.V2u 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=09044d9a0ce59be8dd2fbd19cae7ffecc189e16bb870e79d 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.hWX 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 09044d9a0ce59be8dd2fbd19cae7ffecc189e16bb870e79d 2 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 09044d9a0ce59be8dd2fbd19cae7ffecc189e16bb870e79d 2 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=09044d9a0ce59be8dd2fbd19cae7ffecc189e16bb870e79d 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.hWX 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.hWX 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.hWX 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:58.981 12:31:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=30a8f0df0e8b50f0617e7566f7196912 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.jsy 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 30a8f0df0e8b50f0617e7566f7196912 0 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 30a8f0df0e8b50f0617e7566f7196912 0 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=30a8f0df0e8b50f0617e7566f7196912 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:58.981 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.jsy 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.jsy 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.jsy 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=8948348cc6857cadb30bd5a5833ccb4b5fb480eb87ff08c90b38efddb6a079e0 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.tgq 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 8948348cc6857cadb30bd5a5833ccb4b5fb480eb87ff08c90b38efddb6a079e0 3 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 8948348cc6857cadb30bd5a5833ccb4b5fb480eb87ff08c90b38efddb6a079e0 3 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=8948348cc6857cadb30bd5a5833ccb4b5fb480eb87ff08c90b38efddb6a079e0 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.tgq 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.tgq 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.tgq 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1792571 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1792571 ']' 00:26:59.241 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.242 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:59.242 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.242 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:59.242 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.242 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:59.242 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:59.242 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:59.242 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.g0g 00:26:59.242 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.242 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.vlj ]] 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vlj 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.L2J 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.502 
12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.p8L ]] 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.p8L 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.vyN 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.V2u ]] 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.V2u 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.hWX 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.jsy ]] 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.jsy 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.tgq 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.502 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 
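
Note: the loop above registers every generated file with the SPDK keyring so the tests can refer to keys by name (key0..key4 for host keys, ckey0..ckey3 for controller keys; keys[4] deliberately has no counterpart). rpc_cmd is, roughly, a wrapper around scripts/rpc.py, so the calls amount to:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc keyring_file_add_key key0 /tmp/spdk.key-null.g0g
    $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vlj
    $rpc keyring_file_add_key key1 /tmp/spdk.key-null.L2J
    $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.p8L
    $rpc keyring_file_add_key key2 /tmp/spdk.key-sha256.vyN
    $rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.V2u
    $rpc keyring_file_add_key key3 /tmp/spdk.key-sha384.hWX
    $rpc keyring_file_add_key ckey3 /tmp/spdk.key-null.jsy
    $rpc keyring_file_add_key key4 /tmp/spdk.key-sha512.tgq   # no ckey4 in this run
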
00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]]
00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet
00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]]
00:26:59.503 12:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:27:02.801 Waiting for block devices as requested
00:27:02.801 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:27:02.801 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:27:02.801 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:27:02.801 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:27:02.801 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:27:02.801 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:27:02.801 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:27:03.062 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:27:03.062 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:27:03.323 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:27:03.323 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:27:03.323 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:27:03.323 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:27:03.584 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:27:03.584 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:27:03.584 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:27:03.584 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:27:04.525 12:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme*
00:27:04.525 12:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]]
00:27:04.525 12:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1
00:27:04.525 12:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:27:04.525 12:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:27:04.525 12:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:27:04.525 12:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1
00:27:04.525 12:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:27:04.525 12:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:27:04.525 No valid GPT data, bailing
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]]
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:04.525 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:27:04.785
00:27:04.785 Discovery Log Number of Records 2, Generation counter 2
00:27:04.785 =====Discovery Log Entry 0======
00:27:04.785 trtype: tcp
00:27:04.785 adrfam: ipv4
00:27:04.785 subtype: current discovery subsystem
00:27:04.785 treq: not specified, sq flow control disable supported
00:27:04.785 portid: 1
00:27:04.785 trsvcid: 4420
00:27:04.785 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:04.785 traddr: 10.0.0.1
00:27:04.785 eflags: none
00:27:04.785 sectype: none
00:27:04.785 =====Discovery Log Entry 1======
00:27:04.785 trtype: tcp
00:27:04.785 adrfam: ipv4
00:27:04.785 subtype: nvme subsystem
00:27:04.785 treq: not specified, sq flow control disable supported
00:27:04.786 portid: 1
00:27:04.786 trsvcid: 4420
00:27:04.786 subnqn: nqn.2024-02.io.spdk:cnode0
00:27:04.786 traddr: 10.0.0.1
00:27:04.786 eflags: none
00:27:04.786 sectype: none
00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==:
00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==:
00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host
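
Note: xtrace does not print redirections, so the bare echo lines above look odd; they are writes into the kernel nvmet configfs tree just created. Matching the standard nvmet attribute names to the order and values in the trace, the likely sequence is (redirect targets inferred, not shown in the log):

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"            # later revisited once a host is whitelisted
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"               # publish the subsystem on the port

The nvme discover output confirms the result: the kernel target exposes the discovery subsystem plus nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420. host/auth.sh@36-@38 then creates the host entry and links it under allowed_hosts; the echo 0 at @37 presumably flips attr_allow_any_host back off.
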
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.786 nvme0n1 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.786 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:05.046 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: ]] 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
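
Note: connect_authenticate reduces to two RPCs against the running app: constrain the allowed DH-HMAC-CHAP digests and FFDHE groups, then attach a controller with a named key pair. The first attach above, written out as plain commands:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1    # keyring names registered earlier
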
00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.047 nvme0n1 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.047 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.306 12:31:39 
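
Note: each combination is then verified and torn down before the next one (reusing the rpc shorthand from the sketches above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                       # controller exists: the authenticated attach worked
    $rpc bdev_nvme_detach_controller nvme0     # clean slate for the next digest/dhgroup/key
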
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
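
Note: nvmet_auth_set_key programs the kernel side of the handshake. Its redirect targets are hidden by xtrace as well; the echoed values line up with the kernel nvmet host entry's DH-HMAC-CHAP attributes, so the writes are presumably (attribute paths are an assumption, values are from the trace):

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # host/auth.sh@48: negotiated hash
    echo ffdhe2048 > "$host/dhchap_dhgroup"        # host/auth.sh@49: FFDHE group
    echo "$key" > "$host/dhchap_key"               # @50: the DHHC-1:00:... value
    echo "$ckey" > "$host/dhchap_ctrl_key"         # @51: the DHHC-1:02:... value
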
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.306 nvme0n1 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.306 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: ]] 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.307 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.567 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.567 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.567 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:05.567 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:05.567 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:05.567 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.567 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.567 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:05.567 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.567 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:05.567 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:05.567 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:05.567 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:05.567 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.567 12:31:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.567 nvme0n1 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: ]] 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.567 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.827 nvme0n1 00:27:05.827 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.827 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.827 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.827 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.827 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.827 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.827 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.827 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.827 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.827 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.827 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.827 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.827 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:05.827 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.827 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.827 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.828 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.088 nvme0n1 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.088 12:31:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: ]] 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.088 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.089 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.089 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.089 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:06.089 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.089 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.349 nvme0n1 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.349 
12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.349 12:31:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.610 nvme0n1 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: ]] 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.610 12:31:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.610 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.871 nvme0n1 00:27:06.871 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.871 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.871 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.871 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.871 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.871 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.871 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.871 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.871 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: ]] 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.872 12:31:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.872 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.132 nvme0n1 00:27:07.132 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.132 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.132 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.132 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:07.133 12:31:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.133 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.394 nvme0n1 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: ]] 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.394 12:31:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.654 nvme0n1 00:27:07.654 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.654 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.654 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.654 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.654 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.654 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:07.916 12:31:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.916 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.178 nvme0n1 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: ]] 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.178 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.438 nvme0n1 00:27:08.438 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: ]] 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.439 12:31:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.699 nvme0n1 00:27:08.699 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.699 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.699 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.699 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.699 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.699 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.960 12:31:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.960 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.220 nvme0n1 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: ]] 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.220 12:31:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.790 nvme0n1 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:09.790 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 
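Zooming out, these blocks are driven by a nested loop: the outer `for dhgroup in "${dhgroups[@]}"` (host/auth.sh@101) and the inner `for keyid in "${!keys[@]}"` (host/auth.sh@102) visible in the trace. At this point the log has finished sha256/ffdhe4096 and runs the same five key ids against ffdhe6144, then ffdhe8192. A sketch of that driver, with the DHHC-1 secrets elided; the `dhgroups` list here names only the groups this excerpt exercises, and the full script may cover more:

```bash
# Driver implied by the host/auth.sh@101-104 xtrace lines. keys[0..4]/ckeys[0..4]
# hold DHHC-1 secrets; ckeys[4] is empty, so key id 4 runs without a controller
# (bidirectional) key. Only the DH groups seen in this excerpt are listed.
dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # program the target side
        connect_authenticate sha256 "$dhgroup" "$keyid"  # host connect + verify (see sketch above)
    done
done
```

Note the spread of secrets as well: the two-digit field after `DHHC-1:` is the secret's hash-transformation id, and the keys in this log span 00 through 03, so each group pass also covers both untransformed and transformed shared secrets.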
00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.791 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.363 nvme0n1 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.363 12:31:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: ]] 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.363 12:31:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.935 nvme0n1 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: ]] 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.935 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.506 nvme0n1 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.506 12:31:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.767 nvme0n1 00:27:11.767 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.767 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.767 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.767 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.767 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.767 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: ]] 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.028 12:31:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:12.598 nvme0n1 00:27:12.598 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.598 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.598 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.598 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.598 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.598 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.859 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.429 nvme0n1 00:27:13.429 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.429 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.429 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.429 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.429 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.429 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.429 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.429 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.429 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.429 12:31:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:13.689 
12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: ]] 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:13.689 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.690 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.261 nvme0n1 00:27:14.261 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.261 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.261 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.261 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.261 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.261 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.261 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.261 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.261 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.261 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: ]] 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.522 
12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.522 12:31:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.092 nvme0n1 00:27:15.092 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.092 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.092 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.092 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.092 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.092 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.092 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.092 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.092 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.092 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:15.092 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.092 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.092 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.352 12:31:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.923 nvme0n1 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: ]] 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:15.923 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.924 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.184 nvme0n1 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:16.184 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:16.185 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.185 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.444 nvme0n1 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:16.444 12:31:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: ]] 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:16.444 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.445 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.445 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.445 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.445 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:16.445 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:16.445 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:16.445 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.445 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.445 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:16.445 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.445 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:16.445 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:16.445 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:16.445 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:16.445 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.445 12:31:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.704 nvme0n1 00:27:16.704 12:31:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: ]] 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.704 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:16.705 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:16.705 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:16.705 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.705 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.705 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:16.705 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.705 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:16.705 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:16.705 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:16.705 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:16.705 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.705 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.965 nvme0n1 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.965 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.225 nvme0n1 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: ]] 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:17.225 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.226 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.226 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.226 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.226 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:17.226 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:17.226 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:17.226 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.226 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.226 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:17.226 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.226 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:17.226 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:17.226 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:17.226 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:17.226 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.226 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.486 nvme0n1 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.486 
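The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` expansion that recurs at host/auth.sh@58 is why the keyid=4 attaches above carry --dhchap-key key4 but no --dhchap-ctrlr-key: bash's `:+` alternate-value form expands to the flag pair only when a controller key is configured for that keyid, and keyid 4's ckey is empty. A minimal standalone reproduction (the demo array contents are assumptions; only the expansion idiom is taken from the trace):

    # ':+' expands to the alternate value iff the variable is set and non-empty.
    ckeys=(ckey-a ckey-b ckey-c ckey-d "")      # keyid 4 has no controller key
    for keyid in 0 4; do
        args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${args[*]:-<unidirectional auth>}"
    done
    # keyid=0 -> --dhchap-ctrlr-key ckey0
    # keyid=4 -> <unidirectional auth>

Omitting the controller key makes the authentication unidirectional (only the host proves its identity), so the matrix exercises both bidirectional and unidirectional DH-HMAC-CHAP.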
12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:17.486 12:31:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.486 12:31:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.746 nvme0n1 00:27:17.746 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.746 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.746 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.746 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.746 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.746 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.746 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.746 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.746 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.746 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.746 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.746 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.746 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:17.746 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.746 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.746 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: ]] 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.747 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.007 nvme0n1 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: ]] 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.007 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.267 nvme0n1 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:18.267 
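The secrets echoed throughout this trace follow the NVMe DH-HMAC-CHAP secret representation "DHHC-1:<hh>:<base64>:" (TP 8006), where <hh> names the optional secret transform hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the raw secret followed by a 4-byte CRC-32; note that keyids 0-4 between them cover all four <hh> values. A hedged sketch of pulling the raw secret bytes back out (the CRC-32 framing and GNU head's negative -c are assumptions from the nvme-cli key format, not shown in this log; the key is keyid 0's from the trace):

    # Strip the DHHC-1 framing and the trailing CRC-32 from a DH-CHAP secret.
    key='DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL:'
    b64=${key#DHHC-1:*:}; b64=${b64%:}                 # keep only the base64 field
    echo -n "$b64" | base64 -d | head -c -4 | xxd      # secret bytes, CRC-32 removed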
12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.267 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.528 nvme0n1 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.528 
12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: ]] 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.528 12:31:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.789 nvme0n1 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.789 12:31:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.789 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.050 nvme0n1 00:27:19.050 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.050 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.050 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.050 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.050 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.050 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: ]] 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.310 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.571 nvme0n1 00:27:19.571 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.571 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.571 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.571 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.571 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.571 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.571 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.571 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.571 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.571 12:31:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: ]] 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.571 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.832 nvme0n1 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.832 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.833 12:31:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.833 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.093 nvme0n1 00:27:20.093 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:20.354 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: ]] 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.355 12:31:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.924 nvme0n1 00:27:20.924 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.924 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.924 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.924 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.924 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.924 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.924 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.924 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.924 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.924 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.925 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.185 nvme0n1 00:27:21.185 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.185 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.185 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.185 12:31:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.185 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: ]] 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.445 12:31:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.445 12:31:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.015 nvme0n1 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: ]] 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:22.015 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.015 
12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.275 nvme0n1 00:27:22.275 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.275 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.275 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.275 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.275 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.275 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.536 12:31:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.796 nvme0n1 00:27:22.796 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.796 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.796 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.796 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.796 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.055 12:31:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: ]] 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.055 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.056 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.056 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.056 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.056 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.056 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.056 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.056 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.056 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.056 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.056 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.056 12:31:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.625 nvme0n1 00:27:23.625 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.625 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.625 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.625 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.625 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.885 12:31:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.455 nvme0n1 00:27:24.455 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.455 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.455 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.455 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.455 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: ]] 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.715 
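The nvmet_auth_set_key calls traced above stage one key pair on the kernel nvmet target before each host attach. The redirection targets of the echo lines at host/auth.sh@48-51 are not visible in this excerpt; the sketch below assumes they are the DH-CHAP configfs attributes of the allowed-host entry on the Linux nvmet target (the path and attribute names are assumptions, not taken from this log):

# Hedged sketch of one nvmet_auth_set_key pass (sha384 / ffdhe8192 as above):
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # digest, as echoed at @48
echo 'ffdhe8192'    > "$host_dir/dhchap_dhgroup"   # DH group, as echoed at @49
echo "$key"         > "$host_dir/dhchap_key"       # host key (DHHC-1:...), @50
[[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"  # controller key, @51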
12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.715 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.284 nvme0n1 00:27:25.284 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.284 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.284 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.284 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.284 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: ]] 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.543 12:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.510 nvme0n1 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.510 12:32:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:26.510 12:32:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.510 12:32:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.079 nvme0n1 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: ]] 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.079 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:27.339 nvme0n1 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.339 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.601 nvme0n1 00:27:27.601 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.601 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.601 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.601 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.601 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.601 12:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:27.601 
12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: ]] 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.601 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.862 nvme0n1 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: ]] 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.862 
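Each successful attach is then verified and torn down before the next digest/dhgroup/keyid combination is tried; the check repeated throughout this log (host/auth.sh@64-65) reduces to:

# connect_authenticate only passes if the authenticated controller came up
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]                         # the [[ nvme0 == \n\v\m\e\0 ]] test above
rpc_cmd bdev_nvme_detach_controller nvme0    # clean up for the next pass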
12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.862 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.123 nvme0n1 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.123 nvme0n1 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.123 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: ]] 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.384 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.645 nvme0n1 00:27:28.645 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.645 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.645 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.645 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.645 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.645 12:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.645 
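One subtlety worth noting in these traces: the ckey=(...) expansion at host/auth.sh@58 is the bash array idiom for an optional flag. When ckeys[keyid] is empty or unset (keyid 4 above carries no controller key, hence its [[ -z '' ]] checks and key4-only attach), the array stays empty and the attach command is issued without --dhchap-ctrlr-key; otherwise both words are passed, enabling bidirectional (controller) authentication:

ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})  # empty array if no ckey
# later expanded as "${ckey[@]}", contributing zero or two arguments to the attach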
12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:28.645 12:32:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.645 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.906 nvme0n1 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:28.906 12:32:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: ]] 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.906 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.168 nvme0n1 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: ]] 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.168 12:32:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.168 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.430 nvme0n1 00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:29.430 
12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=:
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=:
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:29.430 12:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
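That is the last key of the sha512/ffdhe3072 pass; the trace that follows repeats the identical cycle for ffdhe4096 and then ffdhe6144. Stripped of the xtrace prefixes, the loop at host/auth.sh@101-104 together with the connect_authenticate body traced at @55-@65 condenses to roughly the sketch below. This is a paraphrase assembled from the commands visible in the trace, not the script verbatim: rpc_cmd is the suite's JSON-RPC wrapper, and the nvmet_auth_set_key internals (where the echo 'hmac(sha512)' / echo ffdhe3072 / echo DHHC-1:... writes land on the kernel nvmet side) are not captured by xtrace, so that step is left as the helper call.

# Per-key DH-HMAC-CHAP cycle as traced above (sketch; see hedges in the note).
for dhgroup in "${dhgroups[@]}"; do   # ffdhe3072, ffdhe4096, ffdhe6144, ...
    for keyid in "${!keys[@]}"; do    # 0..4; ckeys[4] is empty, so key 4 is unidirectional
        # Target side: install the DHHC-1 secret (and the controller
        # secret, when one exists) for hmac(sha512) + this dhgroup.
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
        # Host side: pin the initiator to the digest/dhgroup under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # --dhchap-ctrlr-key is appended only for bidirectional keys.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
        # Authentication passed if the controller materialized; then clean up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done

The bare nvme0n1 lines interleaved in the trace are presumably the namespace surfacing after each successful attach, and the recurring [[ 0 == 0 ]] checks appear to be the rpc_cmd wrapper asserting a zero exit status before handing back the output.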
00:27:29.692 nvme0n1 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: ]] 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.692 12:32:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.692 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.954 nvme0n1 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.954 12:32:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.954 12:32:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.954 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.525 nvme0n1 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: ]] 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.525 12:32:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.786 nvme0n1 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: ]] 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.786 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:30.787 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.787 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:30.787 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:30.787 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:30.787 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.787 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.787 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.047 nvme0n1 00:27:31.047 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.047 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.047 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.047 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.047 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.047 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.047 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.047 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.047 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.047 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.047 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.048 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.309 nvme0n1 00:27:31.309 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: ]] 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.590 12:32:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.590 12:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.912 nvme0n1 00:27:31.912 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.912 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.912 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.912 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.912 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.912 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.220 12:32:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.220 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.502 nvme0n1 00:27:32.502 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.502 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.502 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.502 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.502 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.502 12:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.502 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.502 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.502 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.502 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.502 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.502 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: ]] 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.503 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.075 nvme0n1 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: ]] 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.075 12:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.649 nvme0n1 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.649 12:32:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.649 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.221 nvme0n1 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM5NWM4N2Q3N2ZjYzc1MjNhMWFiZTAzNzA1OTBhYzYYkugL: 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: ]] 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTRmODQwYzc3MWQ0ZDE4NjZkYzQ3ZTdmMTQ5NWIzM2VkMmVjYThlODY4Y2I3YWVlMTM4NDA4Mzc0ZTRiZDYyNgRrLX0=: 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:34.221 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:34.222 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:34.222 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.222 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.222 12:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.165 nvme0n1 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.165 12:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.737 nvme0n1 00:27:35.737 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.737 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.737 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.737 12:32:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.737 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:35.997 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: ]] 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.998 12:32:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.998 12:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.569 nvme0n1 00:27:36.569 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDkwNDRkOWEwY2U1OWJlOGRkMmZiZDE5Y2FlN2ZmZWNjMTg5ZTE2YmI4NzBlNzlkGG+KdA==: 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: ]] 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzBhOGYwZGYwZThiNTBmMDYxN2U3NTY2ZjcxOTY5MTLW6RRN: 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.831 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.831 
12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.402 nvme0n1 00:27:37.663 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.663 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.663 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.663 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.663 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.663 12:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk0ODM0OGNjNjg1N2NhZGIzMGJkNWE1ODMzY2NiNGI1ZmI0ODBlYjg3ZmYwOGM5MGIzOGVmZGRiNmEwNzllMGdqIzQ=: 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.663 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.235 nvme0n1 00:27:38.496 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.496 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.496 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.496 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.496 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.496 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.496 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.496 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.496 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.497 request: 00:27:38.497 { 00:27:38.497 "name": "nvme0", 00:27:38.497 "trtype": "tcp", 00:27:38.497 "traddr": "10.0.0.1", 00:27:38.497 "adrfam": "ipv4", 00:27:38.497 "trsvcid": "4420", 00:27:38.497 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:38.497 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:38.497 "prchk_reftag": false, 00:27:38.497 "prchk_guard": false, 00:27:38.497 "hdgst": false, 00:27:38.497 "ddgst": false, 00:27:38.497 "allow_unrecognized_csi": false, 00:27:38.497 "method": "bdev_nvme_attach_controller", 00:27:38.497 "req_id": 1 00:27:38.497 } 00:27:38.497 Got JSON-RPC error response 00:27:38.497 response: 00:27:38.497 { 00:27:38.497 "code": -5, 00:27:38.497 "message": "Input/output error" 00:27:38.497 } 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
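The request/response pair above is the expected failure path: the target subsystem now enforces DH-HMAC-CHAP, so an attach attempt that supplies no --dhchap-key is refused and surfaces as JSON-RPC error -5 (Input/output error). The harness drives this through its NOT wrapper, which inverts the exit status of rpc_cmd. Reduced to a standalone check, the same assertion could look roughly like the sketch below; it assumes SPDK's scripts/rpc.py client is on PATH and reuses the address and NQNs from the trace:

    # Sketch: an unauthenticated attach against a DH-HMAC-CHAP protected
    # subsystem must fail; treat success as a test failure.
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0; then
        echo "FAIL: attach without --dhchap-key unexpectedly succeeded" >&2
        exit 1
    fi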
00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:38.497 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:38.497 12:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:38.497 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:38.497 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.497 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.758 request: 00:27:38.758 { 00:27:38.758 "name": "nvme0", 00:27:38.758 "trtype": "tcp", 00:27:38.758 "traddr": "10.0.0.1", 00:27:38.758 "adrfam": "ipv4", 00:27:38.758 "trsvcid": "4420", 00:27:38.758 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:38.758 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:38.758 "prchk_reftag": false, 00:27:38.758 "prchk_guard": false, 00:27:38.758 "hdgst": false, 00:27:38.758 "ddgst": false, 00:27:38.758 "dhchap_key": "key2", 00:27:38.758 "allow_unrecognized_csi": false, 00:27:38.758 "method": "bdev_nvme_attach_controller", 00:27:38.758 "req_id": 1 00:27:38.758 } 00:27:38.758 Got JSON-RPC error response 00:27:38.758 response: 00:27:38.758 { 00:27:38.758 "code": -5, 00:27:38.758 "message": "Input/output error" 00:27:38.758 } 00:27:38.758 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:38.758 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:38.758 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:38.758 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:38.758 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:38.758 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.758 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:38.758 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.758 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.758 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.758 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
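Authenticating with the wrong key fails the same way: the target currently holds keyid 1 (hmac(sha256), ffdhe2048), so the attach with --dhchap-key key2 above is refused with the same -5, and the harness then checks bdev_nvme_get_controllers | jq length to confirm that neither failed attempt left a controller object behind. That post-failure invariant, stated standalone (a sketch, assuming scripts/rpc.py and jq as used throughout the trace):

    # Sketch: after rejected DH-HMAC-CHAP attaches, no controller may linger.
    count=$(scripts/rpc.py bdev_nvme_get_controllers | jq length)
    if (( count != 0 )); then
        echo "FAIL: $count stale controller(s) after failed attach" >&2
        exit 1
    fi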
00:27:38.758 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:38.758 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:38.758 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:38.758 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.759 request: 00:27:38.759 { 00:27:38.759 "name": "nvme0", 00:27:38.759 "trtype": "tcp", 00:27:38.759 "traddr": "10.0.0.1", 00:27:38.759 "adrfam": "ipv4", 00:27:38.759 "trsvcid": "4420", 00:27:38.759 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:38.759 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:38.759 "prchk_reftag": false, 00:27:38.759 "prchk_guard": false, 00:27:38.759 "hdgst": false, 00:27:38.759 "ddgst": false, 00:27:38.759 "dhchap_key": "key1", 00:27:38.759 "dhchap_ctrlr_key": "ckey2", 00:27:38.759 "allow_unrecognized_csi": false, 00:27:38.759 "method": "bdev_nvme_attach_controller", 00:27:38.759 "req_id": 1 00:27:38.759 } 00:27:38.759 Got JSON-RPC error response 00:27:38.759 response: 00:27:38.759 { 00:27:38.759 "code": -5, 00:27:38.759 "message": "Input/output 
error" 00:27:38.759 } 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.759 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.020 nvme0n1 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: ]] 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.020 request: 00:27:39.020 { 00:27:39.020 "name": "nvme0", 00:27:39.020 "dhchap_key": "key1", 00:27:39.020 "dhchap_ctrlr_key": "ckey2", 00:27:39.020 "method": "bdev_nvme_set_keys", 00:27:39.020 "req_id": 1 00:27:39.020 } 00:27:39.020 Got JSON-RPC error response 00:27:39.020 response: 00:27:39.020 { 00:27:39.020 "code": -13, 00:27:39.020 "message": "Permission denied" 00:27:39.020 } 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.020 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.281 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:39.281 12:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:40.224 12:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.225 12:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:40.225 12:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.225 12:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.225 12:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.225 12:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:40.225 12:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:41.166 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.166 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGU5ZTIxMTk4ZTYzNmE0NjVmMzYyOTQyNDdiNTQxYzg2YWE1NzllYmFlMWNlMGJlnF82UA==: 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: ]] 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MWEwMDYzNzBjODZmNTY3OWY3NGU4OWRhNmIzMmM5NzhkMWY4ODkwMzdkMGM2Mzg1nLCWgw==: 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.167 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.428 nvme0n1 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWE1MTc3N2M3MWI1MzhmOWY5NDExYjcyNDFhM2E1ZTZt/49Y: 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: ]] 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjdjZGU2NGZhM2RlZjM0MWU3ZWM1Yjc5MTY3NDYzNGXk/5dH: 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.428 request: 00:27:41.428 { 00:27:41.428 "name": "nvme0", 00:27:41.428 "dhchap_key": "key2", 00:27:41.428 "dhchap_ctrlr_key": "ckey1", 00:27:41.428 "method": "bdev_nvme_set_keys", 00:27:41.428 "req_id": 1 00:27:41.428 } 00:27:41.428 Got JSON-RPC error response 00:27:41.428 response: 00:27:41.428 { 00:27:41.428 "code": -13, 00:27:41.428 "message": "Permission denied" 00:27:41.428 } 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.428 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:41.429 12:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:42.814 12:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.814 12:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:42.814 12:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.814 12:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:42.814 12:32:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:42.814 rmmod nvme_tcp 00:27:42.814 rmmod nvme_fabrics 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 1792571 ']' 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 1792571 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1792571 ']' 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1792571 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1792571 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1792571' 00:27:42.814 killing process with pid 1792571 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1792571 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1792571 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:27:42.814 12:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.360 12:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:45.360 12:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:45.360 12:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:45.361 12:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:45.361 12:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:45.361 12:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:27:45.361 12:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:45.361 12:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:45.361 12:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:45.361 12:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:45.361 12:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:27:45.361 12:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:27:45.361 12:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:48.664 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:48.664 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:48.664 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:48.664 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:48.664 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:48.664 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:48.664 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:48.664 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:48.664 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:48.664 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:48.664 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:48.664 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:48.664 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:48.664 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:48.664 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:48.664 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:48.664 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:48.926 12:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.g0g /tmp/spdk.key-null.L2J /tmp/spdk.key-sha256.vyN /tmp/spdk.key-sha384.hWX /tmp/spdk.key-sha512.tgq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:48.926 12:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:52.231 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:52.231 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:52.231 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:27:52.231 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:52.231 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:52.231 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:52.492 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:52.492 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:52.492 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:52.492 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:52.492 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:52.492 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:52.492 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:52.492 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:52.492 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:52.492 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:52.492 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:52.753 00:27:52.753 real 1m2.388s 00:27:52.753 user 0m55.737s 00:27:52.753 sys 0m15.260s 00:27:52.753 12:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:52.753 12:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.753 ************************************ 00:27:52.753 END TEST nvmf_auth_host 00:27:52.753 ************************************ 00:27:52.753 12:32:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:52.753 12:32:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:52.753 12:32:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:52.753 12:32:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:52.753 12:32:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.753 ************************************ 00:27:52.753 START TEST nvmf_digest 00:27:52.753 ************************************ 00:27:52.753 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:53.016 * Looking for test storage... 
00:27:53.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:53.016 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:53.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.017 --rc genhtml_branch_coverage=1 00:27:53.017 --rc genhtml_function_coverage=1 00:27:53.017 --rc genhtml_legend=1 00:27:53.017 --rc geninfo_all_blocks=1 00:27:53.017 --rc geninfo_unexecuted_blocks=1 00:27:53.017 00:27:53.017 ' 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:53.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.017 --rc genhtml_branch_coverage=1 00:27:53.017 --rc genhtml_function_coverage=1 00:27:53.017 --rc genhtml_legend=1 00:27:53.017 --rc geninfo_all_blocks=1 00:27:53.017 --rc geninfo_unexecuted_blocks=1 00:27:53.017 00:27:53.017 ' 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:53.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.017 --rc genhtml_branch_coverage=1 00:27:53.017 --rc genhtml_function_coverage=1 00:27:53.017 --rc genhtml_legend=1 00:27:53.017 --rc geninfo_all_blocks=1 00:27:53.017 --rc geninfo_unexecuted_blocks=1 00:27:53.017 00:27:53.017 ' 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:53.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.017 --rc genhtml_branch_coverage=1 00:27:53.017 --rc genhtml_function_coverage=1 00:27:53.017 --rc genhtml_legend=1 00:27:53.017 --rc geninfo_all_blocks=1 00:27:53.017 --rc geninfo_unexecuted_blocks=1 00:27:53.017 00:27:53.017 ' 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.017 
12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:53.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:53.017 12:32:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:53.017 12:32:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:01.158 
12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:01.158 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:01.158 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:01.158 Found net devices under 0000:4b:00.0: cvl_0_0 
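Interface discovery here is pure sysfs: for each candidate PCI function the script globs /sys/bus/pci/devices/$pci/net/ and keeps whatever netdev the kernel registered there. A hand-run equivalent for the two E810 ports in this log (a sketch; BDFs taken from the lines above):

    # Print the netdev name registered for each PCI function.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
    done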
00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:01.158 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:01.158 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:01.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:01.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:28:01.159 00:28:01.159 --- 10.0.0.2 ping statistics --- 00:28:01.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.159 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:01.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:01.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:28:01.159 00:28:01.159 --- 10.0.0.1 ping statistics --- 00:28:01.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.159 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:01.159 ************************************ 00:28:01.159 START TEST nvmf_digest_clean 00:28:01.159 ************************************ 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=1809956 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 1809956 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1809956 ']' 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:01.159 12:32:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:01.159 [2024-11-04 12:32:34.803792] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:28:01.159 [2024-11-04 12:32:34.803861] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.159 [2024-11-04 12:32:34.878972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.159 [2024-11-04 12:32:34.921791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:01.159 [2024-11-04 12:32:34.921830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:01.159 [2024-11-04 12:32:34.921839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:01.159 [2024-11-04 12:32:34.921846] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:01.159 [2024-11-04 12:32:34.921852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:01.159 [2024-11-04 12:32:34.922485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.159 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:01.159 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:01.159 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:01.159 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:01.159 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:01.159 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.159 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:01.159 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:01.159 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:01.159 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.159 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:01.159 null0 00:28:01.159 [2024-11-04 12:32:35.706446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.421 [2024-11-04 12:32:35.730637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.421 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.421 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:01.421 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:01.421 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:01.421 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:01.421 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:01.421 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:01.421 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:01.421 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1810141 00:28:01.421 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1810141 /var/tmp/bperf.sock 00:28:01.421 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1810141 ']' 00:28:01.421 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:01.421 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:01.421 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:28:01.421 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:01.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:01.421 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:01.421 12:32:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:01.421 [2024-11-04 12:32:35.788696] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:28:01.421 [2024-11-04 12:32:35.788744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1810141 ] 00:28:01.421 [2024-11-04 12:32:35.864986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.421 [2024-11-04 12:32:35.900655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.365 12:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:02.365 12:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:02.365 12:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:02.365 12:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:02.365 12:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:02.365 12:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:02.365 12:32:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:02.626 nvme0n1 00:28:02.887 12:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:02.887 12:32:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:02.887 Running I/O for 2 seconds... 
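[editor's note] Each run_bperf iteration above follows the same host-side sequence: bdevperf starts paused (--wait-for-rpc), the framework is released, the controller is attached with data digest enabled (--ddgst), and the workload is driven over the bperf RPC socket. Condensed from the trace, with $rootdir standing in for the spdk checkout path:

  "$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
  "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests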
00:28:04.771 19738.00 IOPS, 77.10 MiB/s [2024-11-04T11:32:39.341Z] 19680.50 IOPS, 76.88 MiB/s 00:28:04.771 Latency(us) 00:28:04.771 [2024-11-04T11:32:39.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.771 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:04.771 nvme0n1 : 2.00 19716.48 77.02 0.00 0.00 6486.29 2990.08 20425.39 00:28:04.771 [2024-11-04T11:32:39.341Z] =================================================================================================================== 00:28:04.771 [2024-11-04T11:32:39.341Z] Total : 19716.48 77.02 0.00 0.00 6486.29 2990.08 20425.39 00:28:04.771 { 00:28:04.771 "results": [ 00:28:04.771 { 00:28:04.771 "job": "nvme0n1", 00:28:04.771 "core_mask": "0x2", 00:28:04.771 "workload": "randread", 00:28:04.772 "status": "finished", 00:28:04.772 "queue_depth": 128, 00:28:04.772 "io_size": 4096, 00:28:04.772 "runtime": 2.002842, 00:28:04.772 "iops": 19716.482877830604, 00:28:04.772 "mibps": 77.0175112415258, 00:28:04.772 "io_failed": 0, 00:28:04.772 "io_timeout": 0, 00:28:04.772 "avg_latency_us": 6486.2915802713, 00:28:04.772 "min_latency_us": 2990.08, 00:28:04.772 "max_latency_us": 20425.386666666665 00:28:04.772 } 00:28:04.772 ], 00:28:04.772 "core_count": 1 00:28:04.772 } 00:28:04.772 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:04.772 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:04.772 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:04.772 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:04.772 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:04.772 | select(.opcode=="crc32c") 00:28:04.772 | "\(.module_name) \(.executed)"' 00:28:05.033 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:05.033 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:05.033 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:05.033 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:05.033 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1810141 00:28:05.033 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1810141 ']' 00:28:05.033 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1810141 00:28:05.033 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:05.033 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:05.033 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1810141 00:28:05.033 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:05.033 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = 
sudo ']' 00:28:05.033 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1810141' 00:28:05.033 killing process with pid 1810141 00:28:05.033 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1810141 00:28:05.033 Received shutdown signal, test time was about 2.000000 seconds 00:28:05.033 00:28:05.033 Latency(us) 00:28:05.033 [2024-11-04T11:32:39.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.033 [2024-11-04T11:32:39.603Z] =================================================================================================================== 00:28:05.033 [2024-11-04T11:32:39.603Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:05.033 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1810141 00:28:05.294 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:05.294 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:05.294 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:05.294 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:05.294 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:05.294 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:05.294 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:05.294 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1810833 00:28:05.294 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1810833 /var/tmp/bperf.sock 00:28:05.294 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1810833 ']' 00:28:05.294 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:05.294 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:05.294 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:05.294 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:05.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:05.294 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:05.294 12:32:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:05.294 [2024-11-04 12:32:39.712725] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:28:05.294 [2024-11-04 12:32:39.712811] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1810833 ] 00:28:05.294 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:05.294 Zero copy mechanism will not be used. 00:28:05.294 [2024-11-04 12:32:39.789445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.294 [2024-11-04 12:32:39.819103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.237 12:32:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:06.237 12:32:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:06.237 12:32:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:06.237 12:32:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:06.238 12:32:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:06.238 12:32:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.238 12:32:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.808 nvme0n1 00:28:06.808 12:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:06.808 12:32:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:06.808 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:06.808 Zero copy mechanism will not be used. 00:28:06.808 Running I/O for 2 seconds... 
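[editor's note] The MiB/s column in these bdevperf tables is simply IOPS multiplied by the I/O size. Using the first run's numbers from the table above as a worked check:

  awk 'BEGIN {
    printf "%.2f MiB/s\n", 19738.00 * 4096 / 1048576   # 1s sample  -> 77.10
    printf "%.2f MiB/s\n", 19716.48 * 4096 / 1048576   # 2s average -> 77.02
  }'

Both match the trace, including the "mibps" field in the JSON result.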
00:28:08.695 3626.00 IOPS, 453.25 MiB/s [2024-11-04T11:32:43.265Z] 4008.50 IOPS, 501.06 MiB/s 00:28:08.695 Latency(us) 00:28:08.695 [2024-11-04T11:32:43.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.695 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:08.695 nvme0n1 : 2.00 4012.01 501.50 0.00 0.00 3985.70 394.24 12724.91 00:28:08.695 [2024-11-04T11:32:43.265Z] =================================================================================================================== 00:28:08.695 [2024-11-04T11:32:43.265Z] Total : 4012.01 501.50 0.00 0.00 3985.70 394.24 12724.91 00:28:08.695 { 00:28:08.695 "results": [ 00:28:08.695 { 00:28:08.695 "job": "nvme0n1", 00:28:08.695 "core_mask": "0x2", 00:28:08.695 "workload": "randread", 00:28:08.695 "status": "finished", 00:28:08.695 "queue_depth": 16, 00:28:08.695 "io_size": 131072, 00:28:08.695 "runtime": 2.002237, 00:28:08.695 "iops": 4012.012563947225, 00:28:08.695 "mibps": 501.5015704934031, 00:28:08.695 "io_failed": 0, 00:28:08.695 "io_timeout": 0, 00:28:08.695 "avg_latency_us": 3985.7007444292294, 00:28:08.695 "min_latency_us": 394.24, 00:28:08.695 "max_latency_us": 12724.906666666666 00:28:08.695 } 00:28:08.695 ], 00:28:08.695 "core_count": 1 00:28:08.695 } 00:28:08.695 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:08.695 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:08.695 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:08.695 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:08.695 | select(.opcode=="crc32c") 00:28:08.695 | "\(.module_name) \(.executed)"' 00:28:08.695 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:08.956 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:08.956 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:08.956 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:08.956 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:08.956 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1810833 00:28:08.956 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1810833 ']' 00:28:08.956 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1810833 00:28:08.956 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:08.956 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:08.956 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1810833 00:28:08.956 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:08.956 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = 
sudo ']' 00:28:08.956 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1810833' 00:28:08.956 killing process with pid 1810833 00:28:08.956 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1810833 00:28:08.956 Received shutdown signal, test time was about 2.000000 seconds 00:28:08.956 00:28:08.956 Latency(us) 00:28:08.956 [2024-11-04T11:32:43.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.956 [2024-11-04T11:32:43.526Z] =================================================================================================================== 00:28:08.956 [2024-11-04T11:32:43.526Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:08.956 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1810833 00:28:09.216 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:09.216 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:09.216 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:09.216 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:09.216 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:09.216 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:09.216 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:09.216 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1811689 00:28:09.216 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1811689 /var/tmp/bperf.sock 00:28:09.216 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1811689 ']' 00:28:09.216 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:09.216 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:09.216 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:09.216 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:09.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:09.216 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:09.216 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:09.216 [2024-11-04 12:32:43.658776] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:28:09.216 [2024-11-04 12:32:43.658835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1811689 ] 00:28:09.216 [2024-11-04 12:32:43.734047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.216 [2024-11-04 12:32:43.763701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.475 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:09.475 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:09.475 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:09.475 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:09.475 12:32:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:09.475 12:32:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.476 12:32:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.736 nvme0n1 00:28:09.736 12:32:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:09.736 12:32:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:09.996 Running I/O for 2 seconds... 
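[editor's note] The jq filter repeated after every run is how the test proves which accel module actually computed the crc32c digests. A sketch of that check, assuming the same bperf socket and the "software" module expected when DSA scanning is off:

  "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' |
    { read -r acc_module acc_executed
      (( acc_executed > 0 )) && [[ $acc_module == software ]]; }

A nonzero executed count with module_name "software" is what each run above asserts before tearing down its bdevperf instance.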
00:28:11.879 21560.00 IOPS, 84.22 MiB/s [2024-11-04T11:32:46.449Z] 21666.00 IOPS, 84.63 MiB/s 00:28:11.879 Latency(us) 00:28:11.879 [2024-11-04T11:32:46.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.879 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:11.879 nvme0n1 : 2.00 21680.63 84.69 0.00 0.00 5896.27 2143.57 16711.68 00:28:11.879 [2024-11-04T11:32:46.449Z] =================================================================================================================== 00:28:11.879 [2024-11-04T11:32:46.449Z] Total : 21680.63 84.69 0.00 0.00 5896.27 2143.57 16711.68 00:28:11.879 { 00:28:11.879 "results": [ 00:28:11.879 { 00:28:11.879 "job": "nvme0n1", 00:28:11.879 "core_mask": "0x2", 00:28:11.879 "workload": "randwrite", 00:28:11.879 "status": "finished", 00:28:11.879 "queue_depth": 128, 00:28:11.879 "io_size": 4096, 00:28:11.879 "runtime": 2.004554, 00:28:11.879 "iops": 21680.63319820768, 00:28:11.879 "mibps": 84.68997343049875, 00:28:11.879 "io_failed": 0, 00:28:11.879 "io_timeout": 0, 00:28:11.879 "avg_latency_us": 5896.272424605001, 00:28:11.879 "min_latency_us": 2143.5733333333333, 00:28:11.879 "max_latency_us": 16711.68 00:28:11.879 } 00:28:11.879 ], 00:28:11.879 "core_count": 1 00:28:11.879 } 00:28:11.879 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:11.879 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:11.879 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:11.879 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:11.879 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:11.879 | select(.opcode=="crc32c") 00:28:11.879 | "\(.module_name) \(.executed)"' 00:28:12.139 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:12.139 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:12.139 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:12.139 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:12.140 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1811689 00:28:12.140 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1811689 ']' 00:28:12.140 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1811689 00:28:12.140 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:12.140 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:12.140 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1811689 00:28:12.140 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:12.140 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:28:12.140 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1811689' 00:28:12.140 killing process with pid 1811689 00:28:12.140 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1811689 00:28:12.140 Received shutdown signal, test time was about 2.000000 seconds 00:28:12.140 00:28:12.140 Latency(us) 00:28:12.140 [2024-11-04T11:32:46.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.140 [2024-11-04T11:32:46.710Z] =================================================================================================================== 00:28:12.140 [2024-11-04T11:32:46.710Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:12.140 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1811689 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1812198 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1812198 /var/tmp/bperf.sock 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1812198 ']' 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:12.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:12.400 [2024-11-04 12:32:46.774401] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:28:12.400 [2024-11-04 12:32:46.774460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1812198 ] 00:28:12.400 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:12.400 Zero copy mechanism will not be used. 00:28:12.400 [2024-11-04 12:32:46.851034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.400 [2024-11-04 12:32:46.880626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:12.400 12:32:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:12.661 12:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:12.661 12:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:12.922 nvme0n1 00:28:12.922 12:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:12.922 12:32:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:13.182 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:13.182 Zero copy mechanism will not be used. 00:28:13.182 Running I/O for 2 seconds... 
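[editor's note] The killprocess exchanges that follow each run (kill -0, ps --no-headers -o comm=, kill, wait) come from autotest_common.sh. A simplified sketch of that helper, not a verbatim copy:

  killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                      # is the pid still alive?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_1 in this log
    if [[ $process_name != sudo ]]; then            # don't signal a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                           # collect the exit status
    fi
  }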
00:28:15.064 4880.00 IOPS, 610.00 MiB/s [2024-11-04T11:32:49.634Z] 4606.00 IOPS, 575.75 MiB/s 00:28:15.064 Latency(us) 00:28:15.064 [2024-11-04T11:32:49.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.064 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:15.064 nvme0n1 : 2.01 4604.46 575.56 0.00 0.00 3469.58 1426.77 6526.29 00:28:15.064 [2024-11-04T11:32:49.634Z] =================================================================================================================== 00:28:15.064 [2024-11-04T11:32:49.634Z] Total : 4604.46 575.56 0.00 0.00 3469.58 1426.77 6526.29 00:28:15.064 { 00:28:15.064 "results": [ 00:28:15.064 { 00:28:15.064 "job": "nvme0n1", 00:28:15.064 "core_mask": "0x2", 00:28:15.064 "workload": "randwrite", 00:28:15.064 "status": "finished", 00:28:15.064 "queue_depth": 16, 00:28:15.064 "io_size": 131072, 00:28:15.064 "runtime": 2.005013, 00:28:15.064 "iops": 4604.458923707726, 00:28:15.064 "mibps": 575.5573654634658, 00:28:15.064 "io_failed": 0, 00:28:15.064 "io_timeout": 0, 00:28:15.064 "avg_latency_us": 3469.5830849220106, 00:28:15.064 "min_latency_us": 1426.7733333333333, 00:28:15.064 "max_latency_us": 6526.293333333333 00:28:15.064 } 00:28:15.064 ], 00:28:15.064 "core_count": 1 00:28:15.064 } 00:28:15.064 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:15.064 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:15.064 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:15.064 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:15.064 | select(.opcode=="crc32c") 00:28:15.064 | "\(.module_name) \(.executed)"' 00:28:15.064 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:15.324 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:15.324 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:15.324 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:15.324 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:15.324 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1812198 00:28:15.324 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1812198 ']' 00:28:15.324 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1812198 00:28:15.324 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:15.324 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:15.324 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1812198 00:28:15.324 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:15.324 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:28:15.324 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1812198' 00:28:15.324 killing process with pid 1812198 00:28:15.324 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1812198 00:28:15.324 Received shutdown signal, test time was about 2.000000 seconds 00:28:15.324 00:28:15.324 Latency(us) 00:28:15.324 [2024-11-04T11:32:49.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.324 [2024-11-04T11:32:49.894Z] =================================================================================================================== 00:28:15.324 [2024-11-04T11:32:49.894Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:15.324 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1812198 00:28:15.584 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1809956 00:28:15.584 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1809956 ']' 00:28:15.584 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1809956 00:28:15.584 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:15.584 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:15.584 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1809956 00:28:15.584 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:15.584 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:15.584 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1809956' 00:28:15.584 killing process with pid 1809956 00:28:15.584 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1809956 00:28:15.584 12:32:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1809956 00:28:15.584 00:28:15.584 real 0m15.348s 00:28:15.584 user 0m30.202s 00:28:15.584 sys 0m3.423s 00:28:15.584 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:15.584 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:15.584 ************************************ 00:28:15.584 END TEST nvmf_digest_clean 00:28:15.584 ************************************ 00:28:15.584 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:15.584 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:15.584 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:15.584 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:15.846 ************************************ 00:28:15.846 START TEST nvmf_digest_error 00:28:15.846 ************************************ 00:28:15.846 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:28:15.846 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:15.846 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:15.846 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:15.846 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.846 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=1812896 00:28:15.846 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 1812896 00:28:15.846 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:15.846 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1812896 ']' 00:28:15.846 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.846 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:15.846 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.846 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:15.846 12:32:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.846 [2024-11-04 12:32:50.227399] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:28:15.846 [2024-11-04 12:32:50.227460] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.846 [2024-11-04 12:32:50.299776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.846 [2024-11-04 12:32:50.340058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.846 [2024-11-04 12:32:50.340095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.846 [2024-11-04 12:32:50.340104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.846 [2024-11-04 12:32:50.340111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.846 [2024-11-04 12:32:50.340118] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
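[editor's note] The error-path test starts its target the same way as the clean one: nvmf_tgt is launched paused inside the namespace and the harness polls the RPC socket before issuing commands. A sketch of that waitforlisten pattern, assuming rpc_get_methods as the probe and the retry budget of 100 visible in the trace (the real helper's timing differs):

  ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
  done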
00:28:15.846 [2024-11-04 12:32:50.340709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.789 [2024-11-04 12:32:51.058817] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.789 null0 00:28:16.789 [2024-11-04 12:32:51.140807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.789 [2024-11-04 12:32:51.165003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1813243 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1813243 /var/tmp/bperf.sock 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1813243 ']' 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:16.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:16.789 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.789 [2024-11-04 12:32:51.219318] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:28:16.789 [2024-11-04 12:32:51.219368] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1813243 ] 00:28:16.789 [2024-11-04 12:32:51.295525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.789 [2024-11-04 12:32:51.325198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.050 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:17.050 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:17.050 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:17.050 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:17.050 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:17.050 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.050 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.050 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.050 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.050 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.311 nvme0n1 00:28:17.311 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:17.311 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.311 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
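[editor's note] Ordering matters in the setup just traced: crc32c is assigned to the "error" accel module at target startup, injection stays disabled while the host attaches with --ddgst, and only then is corruption armed, so the digests the target computes afterwards are wrong and the host reports them. A condensed sketch, with rpc_cmd addressing the target socket and the -i 256 argument carried over verbatim from the trace:

  rpc_cmd accel_assign_opc -o crc32c -m error             # target, pre-framework_start_init
  "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable   # keep the attach itself clean
  "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # arm corruption, as traced

The "data digest error on tqpair" notices and TRANSIENT TRANSPORT ERROR completions that follow are the expected result of this injection.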
00:28:17.572 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.572 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:17.572 12:32:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:17.572 Running I/O for 2 seconds... 00:28:17.572 [2024-11-04 12:32:51.993023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:17.572 [2024-11-04 12:32:51.993054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.572 [2024-11-04 12:32:51.993063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.572 [2024-11-04 12:32:52.003890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:17.572 [2024-11-04 12:32:52.003908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.572 [2024-11-04 12:32:52.003914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.572 [2024-11-04 12:32:52.017833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:17.572 [2024-11-04 12:32:52.017852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.573 [2024-11-04 12:32:52.017859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.573 [2024-11-04 12:32:52.030114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:17.573 [2024-11-04 12:32:52.030133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.573 [2024-11-04 12:32:52.030140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.573 [2024-11-04 12:32:52.042743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:17.573 [2024-11-04 12:32:52.042765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.573 [2024-11-04 12:32:52.042771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.573 [2024-11-04 12:32:52.054767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:17.573 [2024-11-04 12:32:52.054786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.573 [2024-11-04 12:32:52.054792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:17.573 [2024-11-04 12:32:52.067964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470)
00:28:17.573 [2024-11-04 12:32:52.067982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.573 [2024-11-04 12:32:52.067989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern -- a data digest error on tqpair=(0xd15470), the affected READ command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats every 10-15 ms, differing only in cid and lba, from 12:32:52.080 through 12:32:52.966 ...]
00:28:18.621 20170.00 IOPS, 78.79 MiB/s [2024-11-04T11:32:53.191Z]
[... the pattern continues uninterrupted from 12:32:52.978 through 12:32:53.770 ...]
00:28:19.454 [2024-11-04 12:32:53.783731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470)
00:28:19.454 [2024-11-04 12:32:53.783754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.454 [2024-11-04 12:32:53.783761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.454 [2024-11-04 12:32:53.796450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:19.454 [2024-11-04 12:32:53.796468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.454 [2024-11-04 12:32:53.796475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.454 [2024-11-04 12:32:53.808865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:19.454 [2024-11-04 12:32:53.808887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.454 [2024-11-04 12:32:53.808893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.454 [2024-11-04 12:32:53.821532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:19.454 [2024-11-04 12:32:53.821549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.454 [2024-11-04 12:32:53.821556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.454 [2024-11-04 12:32:53.834393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:19.454 [2024-11-04 12:32:53.834412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.454 [2024-11-04 12:32:53.834418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.454 [2024-11-04 12:32:53.846806] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:19.454 [2024-11-04 12:32:53.846824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.454 [2024-11-04 12:32:53.846831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.454 [2024-11-04 12:32:53.859033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:19.454 [2024-11-04 12:32:53.859052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.454 [2024-11-04 12:32:53.859059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.454 [2024-11-04 12:32:53.872619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:19.454 [2024-11-04 12:32:53.872638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.454 [2024-11-04 12:32:53.872644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.454 [2024-11-04 12:32:53.882933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:19.454 [2024-11-04 12:32:53.882950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.454 [2024-11-04 12:32:53.882957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.454 [2024-11-04 12:32:53.896168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:19.454 [2024-11-04 12:32:53.896186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.454 [2024-11-04 12:32:53.896192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.454 [2024-11-04 12:32:53.908469] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:19.454 [2024-11-04 12:32:53.908486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.454 [2024-11-04 12:32:53.908493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.454 [2024-11-04 12:32:53.921220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:19.454 [2024-11-04 12:32:53.921238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.454 [2024-11-04 12:32:53.921245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.454 [2024-11-04 12:32:53.934192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:19.454 [2024-11-04 12:32:53.934210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.454 [2024-11-04 12:32:53.934217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.454 [2024-11-04 12:32:53.947082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:19.454 [2024-11-04 12:32:53.947099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.455 [2024-11-04 12:32:53.947105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.455 [2024-11-04 12:32:53.959689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470) 00:28:19.455 [2024-11-04 12:32:53.959706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.455 [2024-11-04 12:32:53.959713] nvme_qpair.c: 474:spdk_nvme_print_completion: 
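Each burst above is one record: nvme_tcp.c:1470 is the host receive path failing the accel CRC32C check over incoming read data, nvme_qpair.c then prints the READ it belonged to, and the completion carries NVMe generic status 00/22 (Transient Transport Error) with dnr:0, so the command stays retryable and the job keeps making progress. A quick, hedged way to tally such completions from a captured run (the log file name here is hypothetical):

    # bperf.log: hypothetical capture of the bdevperf console output above
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log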
00:28:19.455 20247.50 IOPS, 79.09 MiB/s [2024-11-04T11:32:54.025Z]
00:28:19.455 [2024-11-04 12:32:53.971980] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd15470)
00:28:19.455 [2024-11-04 12:32:53.971995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.455 [2024-11-04 12:32:53.972002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.455 Latency(us)
00:28:19.455 [2024-11-04T11:32:54.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:19.455 [2024-11-04T11:32:54.025Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:19.455 nvme0n1 : 2.00 20275.45 79.20 0.00 0.00 6306.68 2471.25 22173.01
00:28:19.455 [2024-11-04T11:32:54.025Z] ===================================================================================================================
00:28:19.455 [2024-11-04T11:32:54.025Z] Total : 20275.45 79.20 0.00 0.00 6306.68 2471.25 22173.01
00:28:19.455 {
00:28:19.455   "results": [
00:28:19.455     {
00:28:19.455       "job": "nvme0n1",
00:28:19.455       "core_mask": "0x2",
00:28:19.455       "workload": "randread",
00:28:19.455       "status": "finished",
00:28:19.455       "queue_depth": 128,
00:28:19.455       "io_size": 4096,
00:28:19.455       "runtime": 2.003556,
00:28:19.455       "iops": 20275.450249456466,
00:28:19.455       "mibps": 79.20097753693932,
00:28:19.455       "io_failed": 0,
00:28:19.455       "io_timeout": 0,
00:28:19.455       "avg_latency_us": 6306.684997169091,
00:28:19.455       "min_latency_us": 2471.2533333333336,
00:28:19.455       "max_latency_us": 22173.013333333332
00:28:19.455     }
00:28:19.455   ],
00:28:19.455   "core_count": 1
00:28:19.455 }
00:28:19.726 12:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:19.726 12:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:19.726 12:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:28:19.726 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:19.726 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 ))
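The (( 159 > 0 )) above is the whole pass criterion for this case: at least one transient transport error must have accumulated during the two-second run. The count is not scraped from the console; bdev_get_iostat on bdevperf's RPC socket exposes per-status NVMe error counters because the controller was created with --nvme-error-stat, and jq pulls out the 00/22 counter. A minimal sketch of the helper as the trace shows it behaving (our reconstruction, not the shipped source):

    get_transient_errcount() {
        local bdev=$1
        # ask bdevperf for iostats, then dig out the accumulated 00/22 counter
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }
    # pass criterion, exactly as traced above
    (( $(get_transient_errcount nvme0n1) > 0 ))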
00:28:19.726 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1813243
00:28:19.726 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1813243 ']'
00:28:19.726 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1813243
00:28:19.726 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:19.726 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:19.726 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1813243
00:28:19.726 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:19.726 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:19.726 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1813243'
00:28:19.726 killing process with pid 1813243
00:28:19.726 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1813243
00:28:19.726 Received shutdown signal, test time was about 2.000000 seconds
00:28:19.726 Latency(us)
00:28:19.726 [2024-11-04T11:32:54.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:19.726 [2024-11-04T11:32:54.296Z] ===================================================================================================================
00:28:19.726 [2024-11-04T11:32:54.296Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:19.726 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1813243
00:28:19.990 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:19.990 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:19.990 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:19.990 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:19.990 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:19.990 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1813838
00:28:19.990 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:19.990 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1813838 /var/tmp/bperf.sock
00:28:19.990 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1813838 ']'
00:28:19.990 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:19.990 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:19.990 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:19.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:19.990 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:19.990 12:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:19.991 [2024-11-04 12:32:54.418161] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
00:28:19.991 [2024-11-04 12:32:54.418220] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1813838 ]
00:28:19.991 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:19.991 Zero copy mechanism will not be used.
00:28:19.991 [2024-11-04 12:32:54.491711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:19.991 [2024-11-04 12:32:54.521192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:20.933 12:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:20.933 12:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:20.933 12:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:20.933 12:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:20.933 12:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:20.933 12:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:20.933 12:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:20.933 12:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:20.933 12:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:20.933 12:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:21.507 nvme0n1
00:28:21.507 12:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:21.507 12:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:21.507 12:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:21.507 12:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:21.507 12:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:21.507 12:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:21.507 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:21.507 Zero copy mechanism will not be used.
00:28:21.507 Running I/O for 2 seconds...
00:28:21.507 [2024-11-04 12:32:55.940112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900)
00:28:21.507 [2024-11-04 12:32:55.940145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:21.507 [2024-11-04 12:32:55.940154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:21.507 [... twelve further records of the same form follow (12:32:55.951211 through 12:32:56.060281): data digest error on tqpair=(0xe11900), READ on qid:1 cid:15 with len:32 and varying lba, each completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), dnr:0 ...]
00:28:21.770 [2024-11-04 12:32:56.070299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900)
00:28:21.770 [2024-11-04 12:32:56.070318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:21.770 [2024-11-04 12:32:56.070325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
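The second run drives the same machinery at 128 KiB and queue depth 16; len:32 in the READ records is consistent with a 4096-byte block size (131072 / 4096 = 32), and -i 32 injects two corruptions per queue slot. Condensing the xtrace above into one hedged sketch (the grouping into a script is ours; rpc_cmd does not name a socket in the trace, so we assume the suite's default application socket for the injection, while the bperf_* helpers target /var/tmp/bperf.sock):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # bdevperf side: keep NVMe error statistics and retry failed I/O indefinitely
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # attach with TCP data digest enabled so every received data PDU is checksummed
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt 32 crc32c results (2 x queue depth) in the accel layer
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    # start the preconfigured 2-second randread job
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests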
00:28:21.770 [2024-11-04 12:32:56.079928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900)
00:28:21.770 [2024-11-04 12:32:56.079947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:21.770 [2024-11-04 12:32:56.079954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:21.770 [... several dozen further records of the same form follow (12:32:56.090530 through 12:32:56.782424, Jenkins timestamps 00:28:21.770 to 00:28:22.296): data digest error on tqpair=(0xe11900), READ on qid:1 with len:32, cid 15 and later 14, varying lba, every completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0 ...]
00:28:22.296 [2024-11-04 12:32:56.793461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900)
00:28:22.296 [2024-11-04 12:32:56.793479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.296 [2024-11-04 12:32:56.793485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:22.296 [2024-11-04 12:32:56.799789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900)
00:28:22.296 [2024-11-04 12:32:56.799807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:22.296 [2024-11-04 12:32:56.799814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.296 [2024-11-04 12:32:56.806887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.296 [2024-11-04 12:32:56.806906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.296 [2024-11-04 12:32:56.806913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.296 [2024-11-04 12:32:56.816560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.296 [2024-11-04 12:32:56.816579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.296 [2024-11-04 12:32:56.816586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.296 [2024-11-04 12:32:56.825453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.296 [2024-11-04 12:32:56.825473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.296 [2024-11-04 12:32:56.825479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.296 [2024-11-04 12:32:56.835360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.296 [2024-11-04 12:32:56.835379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.296 [2024-11-04 12:32:56.835385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.296 [2024-11-04 12:32:56.843443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.296 [2024-11-04 12:32:56.843461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.296 [2024-11-04 12:32:56.843468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.296 [2024-11-04 12:32:56.853386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.296 [2024-11-04 12:32:56.853404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.296 [2024-11-04 12:32:56.853414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.296 [2024-11-04 12:32:56.861367] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.296 [2024-11-04 12:32:56.861386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16416 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.296 [2024-11-04 12:32:56.861393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:56.871901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:56.871920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:56.871926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:56.884368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:56.884387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:56.884393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:56.894785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:56.894804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:56.894810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:56.907386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:56.907404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:56.907411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:56.920622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:56.920641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:56.920647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.558 2968.00 IOPS, 371.00 MiB/s [2024-11-04T11:32:57.128Z] [2024-11-04 12:32:56.934145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:56.934164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:56.934171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:56.943876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:56.943895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:56.943902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:56.955873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:56.955896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:56.955902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:56.967583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:56.967602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:56.967608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:56.977519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:56.977538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:56.977544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:56.985583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:56.985602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:56.985609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:56.996512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:56.996531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:56.996538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:57.005685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:57.005704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:57.005710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:57.017760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 
[2024-11-04 12:32:57.017779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:57.017785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:57.030833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:57.030852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:57.030859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:57.040372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:57.040391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:57.040397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:57.052389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:57.052408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:57.052414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:57.062409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:57.062428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:57.062434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.558 [2024-11-04 12:32:57.070778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.558 [2024-11-04 12:32:57.070797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.558 [2024-11-04 12:32:57.070803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.559 [2024-11-04 12:32:57.080352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.559 [2024-11-04 12:32:57.080371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.559 [2024-11-04 12:32:57.080377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.559 [2024-11-04 12:32:57.086713] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xe11900) 00:28:22.559 [2024-11-04 12:32:57.086731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.559 [2024-11-04 12:32:57.086738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.559 [2024-11-04 12:32:57.097003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.559 [2024-11-04 12:32:57.097022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.559 [2024-11-04 12:32:57.097029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.559 [2024-11-04 12:32:57.109227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.559 [2024-11-04 12:32:57.109246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.559 [2024-11-04 12:32:57.109253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.559 [2024-11-04 12:32:57.118852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.559 [2024-11-04 12:32:57.118870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.559 [2024-11-04 12:32:57.118878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.130732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.130757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.130767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.136830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.136849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.136855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.146731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.146754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.146761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.156623] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.156642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.156648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.166910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.166928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.166935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.176163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.176181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.176188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.185872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.185891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.185898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.195559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.195578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.195584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.207690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.207709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.207715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.220412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.220434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.220441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:22.821 [2024-11-04 12:32:57.233139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.233158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.233164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.241255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.241274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.241280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.250034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.250053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.250059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.258340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.258359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.258365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.270107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.270126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.270132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.281408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.281427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.281434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.293332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.293351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.293358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.305514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.305534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.305540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.821 [2024-11-04 12:32:57.316238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.821 [2024-11-04 12:32:57.316258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.821 [2024-11-04 12:32:57.316265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.822 [2024-11-04 12:32:57.326504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.822 [2024-11-04 12:32:57.326523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.822 [2024-11-04 12:32:57.326530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.822 [2024-11-04 12:32:57.336037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.822 [2024-11-04 12:32:57.336056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.822 [2024-11-04 12:32:57.336062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.822 [2024-11-04 12:32:57.346978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.822 [2024-11-04 12:32:57.346998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.822 [2024-11-04 12:32:57.347004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.822 [2024-11-04 12:32:57.356780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.822 [2024-11-04 12:32:57.356799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.822 [2024-11-04 12:32:57.356805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.822 [2024-11-04 12:32:57.367244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.822 [2024-11-04 12:32:57.367263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.822 [2024-11-04 12:32:57.367270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.822 [2024-11-04 12:32:57.377988] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:22.822 [2024-11-04 12:32:57.378007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.822 [2024-11-04 12:32:57.378014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.083 [2024-11-04 12:32:57.389539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.389558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.389565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.400531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.400549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.400559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.413056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.413076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.413082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.424418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.424438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.424445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.437635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.437654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.437661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.450593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.450612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.450618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.463740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.463766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.463772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.477094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.477112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.477119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.488225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.488244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.488251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.500017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.500037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.500044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.512443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.512463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.512469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.524929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.524947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.524954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.536712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.536731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 
[2024-11-04 12:32:57.536738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.546560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.546579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.546586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.557655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.557674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.557680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.569045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.569064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.569071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.581327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.581345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.581352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.593956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.593975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.593982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.607209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.607228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.607238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.620084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.620103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.620109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.630601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.630620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.630627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.641273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.641292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.641299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.084 [2024-11-04 12:32:57.651249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.084 [2024-11-04 12:32:57.651268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.084 [2024-11-04 12:32:57.651275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.346 [2024-11-04 12:32:57.662984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.346 [2024-11-04 12:32:57.663004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-11-04 12:32:57.663010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.346 [2024-11-04 12:32:57.671231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.346 [2024-11-04 12:32:57.671249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-11-04 12:32:57.671256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.346 [2024-11-04 12:32:57.675768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.346 [2024-11-04 12:32:57.675786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-11-04 12:32:57.675792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.346 [2024-11-04 12:32:57.682704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.346 [2024-11-04 12:32:57.682723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-11-04 12:32:57.682729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.346 [2024-11-04 12:32:57.692573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.346 [2024-11-04 12:32:57.692596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-11-04 12:32:57.692603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.346 [2024-11-04 12:32:57.703122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.346 [2024-11-04 12:32:57.703141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-11-04 12:32:57.703148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.346 [2024-11-04 12:32:57.714278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.346 [2024-11-04 12:32:57.714296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-11-04 12:32:57.714303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.346 [2024-11-04 12:32:57.724411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.346 [2024-11-04 12:32:57.724430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-11-04 12:32:57.724437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.346 [2024-11-04 12:32:57.736475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.346 [2024-11-04 12:32:57.736495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-11-04 12:32:57.736501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.346 [2024-11-04 12:32:57.747834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.346 [2024-11-04 12:32:57.747853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-11-04 12:32:57.747859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.346 [2024-11-04 12:32:57.758270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.346 [2024-11-04 12:32:57.758289] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-11-04 12:32:57.758295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.346 [2024-11-04 12:32:57.771449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.346 [2024-11-04 12:32:57.771469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-11-04 12:32:57.771475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.346 [2024-11-04 12:32:57.781360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.346 [2024-11-04 12:32:57.781380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-11-04 12:32:57.781387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.346 [2024-11-04 12:32:57.792209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.346 [2024-11-04 12:32:57.792229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-11-04 12:32:57.792236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.346 [2024-11-04 12:32:57.804498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.346 [2024-11-04 12:32:57.804517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-11-04 12:32:57.804524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.346 [2024-11-04 12:32:57.816890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.346 [2024-11-04 12:32:57.816909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-11-04 12:32:57.816916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.346 [2024-11-04 12:32:57.828975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.346 [2024-11-04 12:32:57.828995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-11-04 12:32:57.829001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.347 [2024-11-04 12:32:57.839045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.347 [2024-11-04 12:32:57.839064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.347 [2024-11-04 12:32:57.839071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.347 [2024-11-04 12:32:57.845820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.347 [2024-11-04 12:32:57.845839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.347 [2024-11-04 12:32:57.845846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.347 [2024-11-04 12:32:57.856190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.347 [2024-11-04 12:32:57.856210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.347 [2024-11-04 12:32:57.856217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.347 [2024-11-04 12:32:57.865606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.347 [2024-11-04 12:32:57.865625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.347 [2024-11-04 12:32:57.865631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.347 [2024-11-04 12:32:57.877027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.347 [2024-11-04 12:32:57.877047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.347 [2024-11-04 12:32:57.877056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.347 [2024-11-04 12:32:57.889132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.347 [2024-11-04 12:32:57.889151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.347 [2024-11-04 12:32:57.889157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.347 [2024-11-04 12:32:57.901509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.347 [2024-11-04 12:32:57.901528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.347 [2024-11-04 12:32:57.901535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.607 [2024-11-04 12:32:57.914258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900) 00:28:23.607 
[2024-11-04 12:32:57.914278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.607 [2024-11-04 12:32:57.914286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.607 [2024-11-04 12:32:57.926593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900)
00:28:23.607 [2024-11-04 12:32:57.926611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.607 [2024-11-04 12:32:57.926618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.607 2929.00 IOPS, 366.12 MiB/s
[2024-11-04T11:32:58.177Z] [2024-11-04 12:32:57.938658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe11900)
00:28:23.607 [2024-11-04 12:32:57.938677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.607 [2024-11-04 12:32:57.938684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.607
00:28:23.607 Latency(us)
00:28:23.607 [2024-11-04T11:32:58.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:23.607 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:23.607 nvme0n1 : 2.01 2927.15 365.89 0.00 0.00 5461.26 795.31 13598.72
00:28:23.607 [2024-11-04T11:32:58.177Z] ===================================================================================================================
00:28:23.607 [2024-11-04T11:32:58.177Z] Total : 2927.15 365.89 0.00 0.00 5461.26 795.31 13598.72
00:28:23.608 {
00:28:23.608 "results": [
00:28:23.608 {
00:28:23.608 "job": "nvme0n1",
00:28:23.608 "core_mask": "0x2",
00:28:23.608 "workload": "randread",
00:28:23.608 "status": "finished",
00:28:23.608 "queue_depth": 16,
00:28:23.608 "io_size": 131072,
00:28:23.608 "runtime": 2.006733,
00:28:23.608 "iops": 2927.1457637862136,
00:28:23.608 "mibps": 365.8932204732767,
00:28:23.608 "io_failed": 0,
00:28:23.608 "io_timeout": 0,
00:28:23.608 "avg_latency_us": 5461.263892861196,
00:28:23.608 "min_latency_us": 795.3066666666666,
00:28:23.608 "max_latency_us": 13598.72
00:28:23.608 }
00:28:23.608 ],
00:28:23.608 "core_count": 1
00:28:23.608 }
00:28:23.608 12:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:23.608 12:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:23.608 12:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:23.608 | .driver_specific
00:28:23.608 | .nvme_error
00:28:23.608 | .status_code
00:28:23.608 | .command_transient_transport_error'
00:28:23.608 12:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:23.608 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 189 > 0 ))
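The exchange just above is the pass/fail heart of each digest_error iteration: after the 2-second randread run, bdevperf's iostat is pulled over the bperf RPC socket and the NVMe transient-transport-error counter must be non-zero (189 here, one per injected digest failure). A minimal stand-alone sketch of that check, assuming only rpc.py, jq, and this run's /var/tmp/bperf.sock; the jq filter is the one shown verbatim in the trace, and the function is a re-creation of the helper traced at host/digest.sh@27-28, not the harness itself:

  get_transient_errcount() {
      # query bdevperf's per-bdev stats over its private RPC socket
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b "$1" |
          jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  }
  (( $(get_transient_errcount nvme0n1) > 0 )) || exit 1
  # Throughput cross-check on the JSON above: MiB/s = IOPS * io_size / 2^20,
  # i.e. 2927.1457 * 131072 / 1048576 = 365.8932, matching the reported "mibps".

The nvme_error block is populated because the controller was created with --nvme-error-stat, which is set again for the randwrite pass below.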
00:28:23.608 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1813838
00:28:23.608 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1813838 ']'
00:28:23.608 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1813838
00:28:23.608 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:23.608 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:23.608 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1813838
00:28:23.869 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:23.869 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:23.869 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1813838'
killing process with pid 1813838
12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1813838
Received shutdown signal, test time was about 2.000000 seconds
00:28:23.869
00:28:23.869 Latency(us)
[2024-11-04T11:32:58.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-04T11:32:58.439Z] ===================================================================================================================
[2024-11-04T11:32:58.439Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:23.869 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1813838
00:28:23.869 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:23.869 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:23.869 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:23.869 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:23.869 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:23.869 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1814607
00:28:23.869 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1814607 /var/tmp/bperf.sock
00:28:23.869 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:23.869 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1814607 ']'
00:28:23.869 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:23.869 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:23.869 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:23.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:23.869 12:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:23.869 [2024-11-04 12:32:58.390515] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
[2024-11-04 12:32:58.390577] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814607 ]
00:28:24.130 [2024-11-04 12:32:58.465861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:24.130 [2024-11-04 12:32:58.494310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:24.701 12:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:24.701 12:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:24.701 12:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:24.701 12:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:24.961 12:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:24.961 12:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:24.961 12:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:24.961 12:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:24.961 12:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:24.962 12:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:25.222 nvme0n1
00:28:25.222 12:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:25.222 12:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.222 12:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:25.222 12:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
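The block above is the whole error-injection recipe for this randwrite pass, and it mirrors what produced the READ failures earlier: digests are corrupted at the crc32c accel operation on the target, so read data fails verification at the host (the nvme_tcp.c records above) and written data fails verification at the target (the tcp.c records that follow), with each failure completing as COMMAND TRANSIENT TRANSPORT ERROR (00/22) that the host retries and counts. A hedged replay of the RPC sequence, under the assumption that rpc_cmd in this harness addresses the nvmf target app on its default RPC socket while bperf_rpc addresses bdevperf on /var/tmp/bperf.sock:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # bdevperf side: keep per-status NVMe error counters and retry failed I/O indefinitely
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side (default socket is an assumption here): clear any stale injection first
  $rpc accel_error_inject_error -o crc32c -t disable
  # --ddgst enables data-digest generation and verification on this controller's queues
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt the result of every 256th crc32c operation, i.e. one bad digest per 256
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256

With 128 outstanding 4096-byte writes for 2 seconds, that interval yields the steady stream of digest errors below; bperf_py perform_tests, traced next, is what actually starts the workload.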
00:28:25.222 12:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:25.222 12:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:25.484 Running I/O for
2 seconds... 00:28:25.484 [2024-11-04 12:32:59.826608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166eb760 00:28:25.484 [2024-11-04 12:32:59.827426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.484 [2024-11-04 12:32:59.827455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:25.484 [2024-11-04 12:32:59.838395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e0ea0 00:28:25.484 [2024-11-04 12:32:59.839344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.484 [2024-11-04 12:32:59.839363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:25.484 [2024-11-04 12:32:59.851928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fc560 00:28:25.484 [2024-11-04 12:32:59.853526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.484 [2024-11-04 12:32:59.853549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:25.484 [2024-11-04 12:32:59.862713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e6b70 00:28:25.484 [2024-11-04 12:32:59.863812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.484 [2024-11-04 12:32:59.863829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:25.485 [2024-11-04 12:32:59.874848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e9e10 00:28:25.485 [2024-11-04 12:32:59.875952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.485 [2024-11-04 12:32:59.875969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:25.485 [2024-11-04 12:32:59.886797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e0630 00:28:25.485 [2024-11-04 12:32:59.887948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.485 [2024-11-04 12:32:59.887964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:25.485 [2024-11-04 12:32:59.898772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fbcf0 00:28:25.485 [2024-11-04 12:32:59.899898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.485 [2024-11-04 12:32:59.899914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:25.485 [2024-11-04 12:32:59.910612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e9e10 00:28:25.485 [2024-11-04 12:32:59.911726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.485 [2024-11-04 12:32:59.911742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:25.485 [2024-11-04 12:32:59.922531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e9e10 00:28:25.485 [2024-11-04 12:32:59.923640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.485 [2024-11-04 12:32:59.923657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:25.485 [2024-11-04 12:32:59.934455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e9e10 00:28:25.485 [2024-11-04 12:32:59.935565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.485 [2024-11-04 12:32:59.935582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:25.485 [2024-11-04 12:32:59.946353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e9e10 00:28:25.485 [2024-11-04 12:32:59.947462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.485 [2024-11-04 12:32:59.947479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:25.485 [2024-11-04 12:32:59.958254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e9e10 00:28:25.485 [2024-11-04 12:32:59.959366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.485 [2024-11-04 12:32:59.959382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:25.485 [2024-11-04 12:32:59.970138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e9e10 00:28:25.485 [2024-11-04 12:32:59.971249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.485 [2024-11-04 12:32:59.971265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:25.485 [2024-11-04 12:32:59.982019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e9e10 00:28:25.485 [2024-11-04 12:32:59.983128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.485 [2024-11-04 12:32:59.983145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:25.485 [2024-11-04 12:32:59.993904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e9e10 00:28:25.485 [2024-11-04 12:32:59.995012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.485 [2024-11-04 12:32:59.995028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:25.485 [2024-11-04 12:33:00.007786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e9e10 00:28:25.485 [2024-11-04 12:33:00.009527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.485 [2024-11-04 12:33:00.009544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:25.485 [2024-11-04 12:33:00.018860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e5220 00:28:25.485 [2024-11-04 12:33:00.020246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.485 [2024-11-04 12:33:00.020263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:25.485 [2024-11-04 12:33:00.031106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166ec408 00:28:25.485 [2024-11-04 12:33:00.032480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.485 [2024-11-04 12:33:00.032497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:25.485 [2024-11-04 12:33:00.043043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e27f0 00:28:25.485 [2024-11-04 12:33:00.044401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.485 [2024-11-04 12:33:00.044418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:25.746 [2024-11-04 12:33:00.056523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e38d0 00:28:25.746 [2024-11-04 12:33:00.058556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.058573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.066879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f4f40 00:28:25.747 [2024-11-04 12:33:00.068264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.068280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.078774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f4f40 00:28:25.747 [2024-11-04 12:33:00.080156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.080172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.090835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f4f40 00:28:25.747 [2024-11-04 12:33:00.092208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.092225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.102720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f4f40 00:28:25.747 [2024-11-04 12:33:00.104091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.104107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.114610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f4f40 00:28:25.747 [2024-11-04 12:33:00.116002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.116018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.126514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f4f40 00:28:25.747 [2024-11-04 12:33:00.127867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.127883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.138416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e1f80 00:28:25.747 [2024-11-04 12:33:00.139780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.139796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.150384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f6020 00:28:25.747 [2024-11-04 12:33:00.151741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.151761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.163859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:25.747 [2024-11-04 12:33:00.165878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.165898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.174271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f9f68 00:28:25.747 [2024-11-04 12:33:00.175655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.175674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.186191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fb048 00:28:25.747 [2024-11-04 12:33:00.187574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.187590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.198102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166ebfd0 00:28:25.747 [2024-11-04 12:33:00.199471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.199489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.209982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e7c50 00:28:25.747 [2024-11-04 12:33:00.211360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.211376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.221902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e7c50 00:28:25.747 [2024-11-04 12:33:00.223246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.223263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.233918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e7c50 00:28:25.747 [2024-11-04 12:33:00.235291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.235307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.245810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e7c50 00:28:25.747 [2024-11-04 12:33:00.247173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.247191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.257681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e7c50 00:28:25.747 [2024-11-04 12:33:00.259044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.259061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.269567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e7c50 00:28:25.747 [2024-11-04 12:33:00.270944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.270963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.282977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e7c50 00:28:25.747 [2024-11-04 12:33:00.284955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.284972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.293349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166eb760 00:28:25.747 [2024-11-04 12:33:00.294723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.294739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.304650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f0bc0 00:28:25.747 [2024-11-04 12:33:00.305986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.747 [2024-11-04 12:33:00.306003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:25.747 [2024-11-04 12:33:00.314590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f9b30 00:28:26.009 [2024-11-04 12:33:00.315457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.315473] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.327228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f9b30 00:28:26.009 [2024-11-04 12:33:00.328103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.328121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.339140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f9b30 00:28:26.009 [2024-11-04 12:33:00.339996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.340013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.351035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f9b30 00:28:26.009 [2024-11-04 12:33:00.351907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.351924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.362118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166dfdc0 00:28:26.009 [2024-11-04 12:33:00.362974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.362991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.376909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e23b8 00:28:26.009 [2024-11-04 12:33:00.378579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.378595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.387241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f5be8 00:28:26.009 [2024-11-04 12:33:00.388271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.388287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.399134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f5be8 00:28:26.009 [2024-11-04 12:33:00.400171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 
12:33:00.400188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.411031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f5be8 00:28:26.009 [2024-11-04 12:33:00.412052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.412068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.422858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e1b48 00:28:26.009 [2024-11-04 12:33:00.423830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.423847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.434750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e1b48 00:28:26.009 [2024-11-04 12:33:00.435760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.435776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.446626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e1b48 00:28:26.009 [2024-11-04 12:33:00.447643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.447660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.458532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e1b48 00:28:26.009 [2024-11-04 12:33:00.459548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.459564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.470422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e1b48 00:28:26.009 [2024-11-04 12:33:00.471448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.471464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.482537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e1b48 00:28:26.009 [2024-11-04 12:33:00.483558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:26.009 [2024-11-04 12:33:00.483574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.494437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e1b48 00:28:26.009 [2024-11-04 12:33:00.495457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.495474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.506337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e1b48 00:28:26.009 [2024-11-04 12:33:00.507326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.507343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.518182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f6458 00:28:26.009 [2024-11-04 12:33:00.519192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.519208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.530095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f6458 00:28:26.009 [2024-11-04 12:33:00.531105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.531122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.542016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166ebfd0 00:28:26.009 [2024-11-04 12:33:00.543021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.009 [2024-11-04 12:33:00.543037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:26.009 [2024-11-04 12:33:00.553939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166ed0b0 00:28:26.009 [2024-11-04 12:33:00.554946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.010 [2024-11-04 12:33:00.554962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:26.010 [2024-11-04 12:33:00.565867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e3498 00:28:26.010 [2024-11-04 12:33:00.566875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14282 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:26.010 [2024-11-04 12:33:00.566891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.577773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f0788 00:28:26.271 [2024-11-04 12:33:00.578770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.578789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.589686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f5378 00:28:26.271 [2024-11-04 12:33:00.590693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.590709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.601612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f4298 00:28:26.271 [2024-11-04 12:33:00.602617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.602634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.613479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e7c50 00:28:26.271 [2024-11-04 12:33:00.614470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.614487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.625346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e7c50 00:28:26.271 [2024-11-04 12:33:00.626345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.626361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.637222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e7c50 00:28:26.271 [2024-11-04 12:33:00.638216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.638232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.650640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e7c50 00:28:26.271 [2024-11-04 12:33:00.652232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13534 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.652248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.660998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166edd58 00:28:26.271 [2024-11-04 12:33:00.661990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.662007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.672900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166edd58 00:28:26.271 [2024-11-04 12:33:00.673881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.673898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.684801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166edd58 00:28:26.271 [2024-11-04 12:33:00.685788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.685804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.696684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166edd58 00:28:26.271 [2024-11-04 12:33:00.697628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.697644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.708527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:26.271 [2024-11-04 12:33:00.709501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.709517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.720418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:26.271 [2024-11-04 12:33:00.721409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.721426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.732301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:26.271 [2024-11-04 12:33:00.733241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 
lba:24389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.733258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.744204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:26.271 [2024-11-04 12:33:00.745186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.745202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.756089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:26.271 [2024-11-04 12:33:00.757026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.757043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.767988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:26.271 [2024-11-04 12:33:00.768956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.768972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.779882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:26.271 [2024-11-04 12:33:00.780821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.780838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.791785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:26.271 [2024-11-04 12:33:00.792756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.792772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.803657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:26.271 [2024-11-04 12:33:00.804632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.804649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:26.271 [2024-11-04 12:33:00.815542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:26.271 [2024-11-04 12:33:00.816872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:32 nsid:1 lba:15327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.271 [2024-11-04 12:33:00.816890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:26.271 21368.00 IOPS, 83.47 MiB/s [2024-11-04T11:33:00.841Z] [2024-11-04 12:33:00.827404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e6b70 00:28:26.272 [2024-11-04 12:33:00.828353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.272 [2024-11-04 12:33:00.828370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:26.534 [2024-11-04 12:33:00.840806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166ee5c8 00:28:26.534 [2024-11-04 12:33:00.842398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.534 [2024-11-04 12:33:00.842415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:26.534 [2024-11-04 12:33:00.851192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e73e0 00:28:26.534 [2024-11-04 12:33:00.852165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.534 [2024-11-04 12:33:00.852182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:26.534 [2024-11-04 12:33:00.862335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f35f0 00:28:26.534 [2024-11-04 12:33:00.863276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.534 [2024-11-04 12:33:00.863293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:26.534 [2024-11-04 12:33:00.874999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f35f0 00:28:26.534 [2024-11-04 12:33:00.875946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.534 [2024-11-04 12:33:00.875963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:26.534 [2024-11-04 12:33:00.886908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f35f0 00:28:26.534 [2024-11-04 12:33:00.887823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.534 [2024-11-04 12:33:00.887843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:26.534 [2024-11-04 12:33:00.898762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e3d08 00:28:26.534 [2024-11-04 
12:33:00.899703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.534 [2024-11-04 12:33:00.899720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:26.534 [2024-11-04 12:33:00.910687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e3d08 00:28:26.534 [2024-11-04 12:33:00.911634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.534 [2024-11-04 12:33:00.911650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:26.534 [2024-11-04 12:33:00.922593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e3d08 00:28:26.534 [2024-11-04 12:33:00.923550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.534 [2024-11-04 12:33:00.923567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:26.534 [2024-11-04 12:33:00.934524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e3d08 00:28:26.534 [2024-11-04 12:33:00.935441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.534 [2024-11-04 12:33:00.935457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:26.534 [2024-11-04 12:33:00.946411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e3d08 00:28:26.534 [2024-11-04 12:33:00.947337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.534 [2024-11-04 12:33:00.947353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:26.534 [2024-11-04 12:33:00.958301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e3d08 00:28:26.534 [2024-11-04 12:33:00.959205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.534 [2024-11-04 12:33:00.959222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:26.534 [2024-11-04 12:33:00.970201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e3d08 00:28:26.534 [2024-11-04 12:33:00.971144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.534 [2024-11-04 12:33:00.971160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:26.534 [2024-11-04 12:33:00.982112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e3d08 
00:28:26.534 [2024-11-04 12:33:00.983020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.534 [2024-11-04 12:33:00.983037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:26.534 [2024-11-04 12:33:00.994019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e3d08 00:28:26.534 [2024-11-04 12:33:00.994990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.534 [2024-11-04 12:33:00.995007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:26.534 [2024-11-04 12:33:01.005114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f2d80 00:28:26.534 [2024-11-04 12:33:01.006041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.534 [2024-11-04 12:33:01.006058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:26.534 [2024-11-04 12:33:01.017762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f2d80 00:28:26.534 [2024-11-04 12:33:01.018697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.535 [2024-11-04 12:33:01.018714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:26.535 [2024-11-04 12:33:01.029649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f2d80 00:28:26.535 [2024-11-04 12:33:01.030562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.535 [2024-11-04 12:33:01.030578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:26.535 [2024-11-04 12:33:01.041557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f2d80 00:28:26.535 [2024-11-04 12:33:01.042498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.535 [2024-11-04 12:33:01.042515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:26.535 [2024-11-04 12:33:01.053472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f2d80 00:28:26.535 [2024-11-04 12:33:01.054414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.535 [2024-11-04 12:33:01.054430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:26.535 [2024-11-04 12:33:01.065362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with 
pdu=0x2000166f2d80 00:28:26.535 [2024-11-04 12:33:01.066292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.535 [2024-11-04 12:33:01.066309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:26.535 [2024-11-04 12:33:01.077264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f2d80 00:28:26.535 [2024-11-04 12:33:01.078171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.535 [2024-11-04 12:33:01.078188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:26.535 [2024-11-04 12:33:01.089151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f2d80 00:28:26.535 [2024-11-04 12:33:01.090080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.535 [2024-11-04 12:33:01.090096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:26.535 [2024-11-04 12:33:01.101073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f2d80 00:28:26.796 [2024-11-04 12:33:01.101992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.796 [2024-11-04 12:33:01.102008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:26.796 [2024-11-04 12:33:01.114588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f2d80 00:28:26.796 [2024-11-04 12:33:01.116159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.796 [2024-11-04 12:33:01.116175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:26.796 [2024-11-04 12:33:01.124938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e6b70 00:28:26.796 [2024-11-04 12:33:01.125855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.796 [2024-11-04 12:33:01.125872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:26.796 [2024-11-04 12:33:01.136830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e6b70 00:28:26.796 [2024-11-04 12:33:01.137756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.796 [2024-11-04 12:33:01.137773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:26.796 [2024-11-04 12:33:01.148721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe14450) with pdu=0x2000166e6b70 00:28:26.796 [2024-11-04 12:33:01.149648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.796 [2024-11-04 12:33:01.149664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:26.796 [2024-11-04 12:33:01.160622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e6b70 00:28:26.796 [2024-11-04 12:33:01.161546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.796 [2024-11-04 12:33:01.161562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:26.796 [2024-11-04 12:33:01.172535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e6b70 00:28:26.796 [2024-11-04 12:33:01.173461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.796 [2024-11-04 12:33:01.173477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:26.796 [2024-11-04 12:33:01.184450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e6b70 00:28:26.796 [2024-11-04 12:33:01.185373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.796 [2024-11-04 12:33:01.185390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:26.796 [2024-11-04 12:33:01.195543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166df118 00:28:26.796 [2024-11-04 12:33:01.196452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.796 [2024-11-04 12:33:01.196471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:26.796 [2024-11-04 12:33:01.208245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166eea00 00:28:26.796 [2024-11-04 12:33:01.209155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.796 [2024-11-04 12:33:01.209172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:26.796 [2024-11-04 12:33:01.220164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166ed920 00:28:26.796 [2024-11-04 12:33:01.221081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.796 [2024-11-04 12:33:01.221099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:26.796 [2024-11-04 12:33:01.233653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xe14450) with pdu=0x2000166e9168 00:28:26.796 [2024-11-04 12:33:01.235183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.796 [2024-11-04 12:33:01.235199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:26.796 [2024-11-04 12:33:01.244159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166ef270 00:28:26.796 [2024-11-04 12:33:01.245055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.796 [2024-11-04 12:33:01.245072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:26.796 [2024-11-04 12:33:01.256110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166ee190 00:28:26.796 [2024-11-04 12:33:01.257050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.796 [2024-11-04 12:33:01.257065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:26.796 [2024-11-04 12:33:01.268016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f92c0 00:28:26.796 [2024-11-04 12:33:01.268951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.796 [2024-11-04 12:33:01.268967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:26.796 [2024-11-04 12:33:01.281410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e6b70 00:28:26.796 [2024-11-04 12:33:01.282966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.796 [2024-11-04 12:33:01.282983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:26.796 [2024-11-04 12:33:01.291829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166ed920 00:28:26.796 [2024-11-04 12:33:01.292743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.796 [2024-11-04 12:33:01.292763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:26.796 [2024-11-04 12:33:01.302995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fe720 00:28:26.796 [2024-11-04 12:33:01.303890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.797 [2024-11-04 12:33:01.303907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:26.797 [2024-11-04 12:33:01.315671] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fe720 00:28:26.797 [2024-11-04 12:33:01.316577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.797 [2024-11-04 12:33:01.316594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:26.797 [2024-11-04 12:33:01.327576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fe720 00:28:26.797 [2024-11-04 12:33:01.328449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.797 [2024-11-04 12:33:01.328465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:26.797 [2024-11-04 12:33:01.339442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:26.797 [2024-11-04 12:33:01.340337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.797 [2024-11-04 12:33:01.340353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:26.797 [2024-11-04 12:33:01.351337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:26.797 [2024-11-04 12:33:01.352227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.797 [2024-11-04 12:33:01.352244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:26.797 [2024-11-04 12:33:01.363259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:26.797 [2024-11-04 12:33:01.364162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.797 [2024-11-04 12:33:01.364178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.375160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:27.059 [2024-11-04 12:33:01.376049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.376065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.387067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:27.059 [2024-11-04 12:33:01.387968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.387984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:27.059 
[2024-11-04 12:33:01.398975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:27.059 [2024-11-04 12:33:01.399863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.399879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.410905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:27.059 [2024-11-04 12:33:01.411790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.411807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.422820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:27.059 [2024-11-04 12:33:01.423675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.423691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.434712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fef90 00:28:27.059 [2024-11-04 12:33:01.435600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.435616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.446610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fef90 00:28:27.059 [2024-11-04 12:33:01.447498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.447515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.458532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fef90 00:28:27.059 [2024-11-04 12:33:01.459411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.459427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.470428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fef90 00:28:27.059 [2024-11-04 12:33:01.471313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.471330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 
dnr:0 00:28:27.059 [2024-11-04 12:33:01.482525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fef90 00:28:27.059 [2024-11-04 12:33:01.483413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.483430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.494446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fef90 00:28:27.059 [2024-11-04 12:33:01.495332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.495349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.506350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fef90 00:28:27.059 [2024-11-04 12:33:01.507226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.507246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.518246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fef90 00:28:27.059 [2024-11-04 12:33:01.519132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.519148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.531657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fef90 00:28:27.059 [2024-11-04 12:33:01.533177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.533194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.542026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166eee38 00:28:27.059 [2024-11-04 12:33:01.542901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.542918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.553945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166eee38 00:28:27.059 [2024-11-04 12:33:01.554815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.554831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0043 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.565840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166eee38 00:28:27.059 [2024-11-04 12:33:01.566717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.566733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.577743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166eee38 00:28:27.059 [2024-11-04 12:33:01.578616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.578633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.588831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166df550 00:28:27.059 [2024-11-04 12:33:01.589680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.589696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.601478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166df550 00:28:27.059 [2024-11-04 12:33:01.602357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.602373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.059 [2024-11-04 12:33:01.613384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166df550 00:28:27.059 [2024-11-04 12:33:01.614253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.059 [2024-11-04 12:33:01.614272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.321 [2024-11-04 12:33:01.627557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166df550 00:28:27.321 [2024-11-04 12:33:01.629061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-11-04 12:33:01.629077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.321 [2024-11-04 12:33:01.638653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166df550 00:28:27.321 [2024-11-04 12:33:01.640142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-11-04 12:33:01.640158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.321 [2024-11-04 12:33:01.651295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166df550 00:28:27.321 [2024-11-04 12:33:01.652779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-11-04 12:33:01.652794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:27.321 [2024-11-04 12:33:01.663145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166eee38 00:28:27.321 [2024-11-04 12:33:01.664645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-11-04 12:33:01.664661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:27.321 [2024-11-04 12:33:01.675053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166eee38 00:28:27.321 [2024-11-04 12:33:01.676499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-11-04 12:33:01.676517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:27.321 [2024-11-04 12:33:01.686919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fef90 00:28:27.321 [2024-11-04 12:33:01.688387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-11-04 12:33:01.688404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.321 [2024-11-04 12:33:01.698818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fef90 00:28:27.321 [2024-11-04 12:33:01.700289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-11-04 12:33:01.700305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:27.321 [2024-11-04 12:33:01.710664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:27.321 [2024-11-04 12:33:01.712150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-11-04 12:33:01.712166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.321 [2024-11-04 12:33:01.722558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166f7100 00:28:27.321 [2024-11-04 12:33:01.723995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-11-04 12:33:01.724012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.321 [2024-11-04 12:33:01.734462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e6738 00:28:27.321 [2024-11-04 12:33:01.735912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-11-04 12:33:01.735928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.321 [2024-11-04 12:33:01.746415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e5a90 00:28:27.321 [2024-11-04 12:33:01.747866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-11-04 12:33:01.747882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.321 [2024-11-04 12:33:01.758319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166ff3c8 00:28:27.321 [2024-11-04 12:33:01.759757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-11-04 12:33:01.759773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.321 [2024-11-04 12:33:01.771794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fd208 00:28:27.321 [2024-11-04 12:33:01.773904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-11-04 12:33:01.773920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.321 [2024-11-04 12:33:01.782124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166e23b8 00:28:27.321 [2024-11-04 12:33:01.783594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-11-04 12:33:01.783610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:27.321 [2024-11-04 12:33:01.793221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fdeb0 00:28:27.321 [2024-11-04 12:33:01.794654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-11-04 12:33:01.794670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:27.321 [2024-11-04 12:33:01.805870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fdeb0 00:28:27.321 [2024-11-04 12:33:01.807332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.321 [2024-11-04 12:33:01.807348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:28:27.321 [2024-11-04 12:33:01.817771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14450) with pdu=0x2000166fdeb0
00:28:27.321 [2024-11-04 12:33:01.819214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:27.322 [2024-11-04 12:33:01.819231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:28:27.322 21417.00 IOPS, 83.66 MiB/s
00:28:27.322 Latency(us)
00:28:27.322 [2024-11-04T11:33:01.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:27.322 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:27.322 nvme0n1 : 2.01 21433.04 83.72 0.00 0.00 5963.32 2034.35 14199.47
00:28:27.322 [2024-11-04T11:33:01.892Z] ===================================================================================================================
00:28:27.322 [2024-11-04T11:33:01.892Z] Total : 21433.04 83.72 0.00 0.00 5963.32 2034.35 14199.47
00:28:27.322 {
00:28:27.322   "results": [
00:28:27.322     {
00:28:27.322       "job": "nvme0n1",
00:28:27.322       "core_mask": "0x2",
00:28:27.322       "workload": "randwrite",
00:28:27.322       "status": "finished",
00:28:27.322       "queue_depth": 128,
00:28:27.322       "io_size": 4096,
00:28:27.322       "runtime": 2.005222,
00:28:27.322       "iops": 21433.038336902348,
00:28:27.322       "mibps": 83.7228060035248,
00:28:27.322       "io_failed": 0,
00:28:27.322       "io_timeout": 0,
00:28:27.322       "avg_latency_us": 5963.322064311974,
00:28:27.322       "min_latency_us": 2034.3466666666666,
00:28:27.322       "max_latency_us": 14199.466666666667
00:28:27.322     }
00:28:27.322   ],
00:28:27.322   "core_count": 1
00:28:27.322 }
00:28:27.322 12:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:27.322 12:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:27.322 | .driver_specific
00:28:27.322 | .nvme_error
00:28:27.322 | .status_code
00:28:27.322 | .command_transient_transport_error'
00:28:27.322 12:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:27.322 12:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:27.583 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 168 > 0 ))
00:28:27.583 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1814607
00:28:27.583 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1814607 ']'
00:28:27.583 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1814607
00:28:27.583 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:27.583 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:27.583 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1814607
00:28:27.583 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:27.583 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:27.583 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1814607'
00:28:27.583 killing process with pid 1814607
00:28:27.583 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1814607
00:28:27.583 Received shutdown signal, test time was about 2.000000 seconds
00:28:27.583
00:28:27.583 Latency(us)
00:28:27.583 [2024-11-04T11:33:02.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:27.583 [2024-11-04T11:33:02.153Z] ===================================================================================================================
00:28:27.583 [2024-11-04T11:33:02.153Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:27.583 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1814607
00:28:27.843 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:27.843 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:27.843 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:27.843 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:27.843 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:27.843 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1815403
00:28:27.843 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1815403 /var/tmp/bperf.sock
00:28:27.843 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:27.843 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1815403 ']'
00:28:27.843 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:27.843 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:27.843 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:27.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:27.843 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:27.843 12:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:27.843 [2024-11-04 12:33:02.242863] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
00:28:27.843 [2024-11-04 12:33:02.242918] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815403 ]
00:28:27.843 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:27.843 Zero copy mechanism will not be used.
00:28:27.843 [2024-11-04 12:33:02.319073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:27.843 [2024-11-04 12:33:02.347852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:28.784 12:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:28.784 12:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:28.784 12:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:28.784 12:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:28.784 12:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:28.784 12:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:28.784 12:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:28.784 12:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:28.784 12:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:28.784 12:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:29.045 nvme0n1
00:28:29.045 12:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:29.045 12:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:29.045 12:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:29.045 12:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:29.045 12:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:29.045 12:33:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:29.045 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:29.045 Zero copy mechanism will not be used.
00:28:29.045 Running I/O for 2 seconds...
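
The xtrace above is the complete recipe for this second error-injection pass (randwrite, 128 KiB I/Os, queue depth 16), interleaved with bdevperf's startup banner. Condensed into plain commands, with $SPDK standing in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ($SPDK is a placeholder name, not from the log), the sequence digest.sh drives is roughly:

    # Record NVMe error counts per status code and retry failed I/Os indefinitely (-1).
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach over TCP with data digest enabled, so every data PDU carries a CRC32C.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm corruption of 32 crc32c operations (rpc_cmd in the trace; its socket is elided there).
    # Each corrupted digest surfaces below as a tcp.c "Data digest error" plus a COMMAND
    # TRANSIENT TRANSPORT ERROR completion that the driver retries instead of failing the bdev.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    # Drive the 2-second workload, then read back the transient-error counter.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The pass criterion is simply that this counter is non-zero once the run completes, as in the (( 168 > 0 )) check that closed the previous queue-depth-128 pass above.
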
00:28:29.045 [2024-11-04 12:33:03.545839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.045 [2024-11-04 12:33:03.546066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.045 [2024-11-04 12:33:03.546095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.045 [2024-11-04 12:33:03.549942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.045 [2024-11-04 12:33:03.550152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.045 [2024-11-04 12:33:03.550171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.045 [2024-11-04 12:33:03.553910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.045 [2024-11-04 12:33:03.554114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.045 [2024-11-04 12:33:03.554131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.045 [2024-11-04 12:33:03.557969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.045 [2024-11-04 12:33:03.558176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.045 [2024-11-04 12:33:03.558193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.046 [2024-11-04 12:33:03.562263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.046 [2024-11-04 12:33:03.562466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.046 [2024-11-04 12:33:03.562484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.046 [2024-11-04 12:33:03.567831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.046 [2024-11-04 12:33:03.568027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.046 [2024-11-04 12:33:03.568044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.046 [2024-11-04 12:33:03.571691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.046 [2024-11-04 12:33:03.571900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.046 [2024-11-04 12:33:03.571918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.046 [2024-11-04 12:33:03.575689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.046 [2024-11-04 12:33:03.575901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.046 [2024-11-04 12:33:03.575919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.046 [2024-11-04 12:33:03.579784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.046 [2024-11-04 12:33:03.579987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.046 [2024-11-04 12:33:03.580004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.046 [2024-11-04 12:33:03.583728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.046 [2024-11-04 12:33:03.583936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.046 [2024-11-04 12:33:03.583953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.046 [2024-11-04 12:33:03.587650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.046 [2024-11-04 12:33:03.587857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.046 [2024-11-04 12:33:03.587874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.046 [2024-11-04 12:33:03.591592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.046 [2024-11-04 12:33:03.591800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.046 [2024-11-04 12:33:03.591817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.046 [2024-11-04 12:33:03.595415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.046 [2024-11-04 12:33:03.595619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.046 [2024-11-04 12:33:03.595636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.046 [2024-11-04 12:33:03.599314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.046 [2024-11-04 12:33:03.599649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.046 [2024-11-04 12:33:03.599668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.046 [2024-11-04 12:33:03.603650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.046 [2024-11-04 12:33:03.603858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.046 [2024-11-04 12:33:03.603874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.046 [2024-11-04 12:33:03.607582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.046 [2024-11-04 12:33:03.607886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.046 [2024-11-04 12:33:03.607906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.046 [2024-11-04 12:33:03.612030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.046 [2024-11-04 12:33:03.612341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.046 [2024-11-04 12:33:03.612359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.307 [2024-11-04 12:33:03.616131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.307 [2024-11-04 12:33:03.616333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.307 [2024-11-04 12:33:03.616350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.307 [2024-11-04 12:33:03.620224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.307 [2024-11-04 12:33:03.620426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.307 [2024-11-04 12:33:03.620442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.307 [2024-11-04 12:33:03.624313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.307 [2024-11-04 12:33:03.624516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.307 [2024-11-04 12:33:03.624533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.307 [2024-11-04 12:33:03.628565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:29.307 [2024-11-04 12:33:03.628772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.307 [2024-11-04 12:33:03.628788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.307 [2024-11-04 12:33:03.632782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.307 [2024-11-04 12:33:03.632985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.307 [2024-11-04 12:33:03.633001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.307 [2024-11-04 12:33:03.637050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.307 [2024-11-04 12:33:03.637251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.307 [2024-11-04 12:33:03.637268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.307 [2024-11-04 12:33:03.641734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.307 [2024-11-04 12:33:03.641943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.307 [2024-11-04 12:33:03.641960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.307 [2024-11-04 12:33:03.646389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.307 [2024-11-04 12:33:03.646591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.307 [2024-11-04 12:33:03.646611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.650469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.650671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.650688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.654414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.654739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.654761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.659154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.659357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.659374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.664063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.664380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.664398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.668284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.668487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.668503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.672971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.673174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.673191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.677417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.677711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.677729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.682170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.682371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.682387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.686809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.687015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.687032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.691165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.691368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.691385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.696075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.696276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.696293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.700815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.701016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.701033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.705512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.705808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.705826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.709947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.710248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.710265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.714659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.714864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.714881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.719079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.719281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.719297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.723442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.723642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.723659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.727363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.727563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.727580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.731769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.731972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.731988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.736463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.736665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.736682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.741146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.741444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.741461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.746737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.747085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.747102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.752377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.752580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.752597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.757933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.758269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.758286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.762722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.762930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.762947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.767410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.767612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.767632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.772431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.308 [2024-11-04 12:33:03.772633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.308 [2024-11-04 12:33:03.772650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.308 [2024-11-04 12:33:03.776409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.309 [2024-11-04 12:33:03.776608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.309 [2024-11-04 12:33:03.776625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.309 [2024-11-04 12:33:03.780284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.309 [2024-11-04 12:33:03.780485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.309 [2024-11-04 12:33:03.780501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.309 [2024-11-04 12:33:03.784440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.309 [2024-11-04 12:33:03.784641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.309 [2024-11-04 12:33:03.784658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.309 [2024-11-04 12:33:03.788540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.309 [2024-11-04 12:33:03.788740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.309 [2024-11-04 12:33:03.788762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.309 [2024-11-04 12:33:03.793136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.309 [2024-11-04 12:33:03.793445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.309 [2024-11-04 12:33:03.793464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.309 [2024-11-04 12:33:03.797694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.309 [2024-11-04 12:33:03.798022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.309 [2024-11-04 12:33:03.798040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.309 [2024-11-04 12:33:03.803942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.309 [2024-11-04 12:33:03.804260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.309 [2024-11-04 12:33:03.804279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.309 [2024-11-04 12:33:03.811482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.309 [2024-11-04 12:33:03.811831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.309 [2024-11-04 12:33:03.811848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.309 [2024-11-04 12:33:03.817339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.309 [2024-11-04 12:33:03.817665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.309 [2024-11-04 12:33:03.817682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.309 [2024-11-04 12:33:03.822711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.309 [2024-11-04 12:33:03.822928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.309 [2024-11-04 12:33:03.822945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.309 [2024-11-04 12:33:03.831691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.309 [2024-11-04 12:33:03.832010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.309 [2024-11-04 12:33:03.832028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.309 [2024-11-04 12:33:03.841146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.309 [2024-11-04 12:33:03.841461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.309 [2024-11-04 12:33:03.841478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.309 [2024-11-04 12:33:03.849485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.309 [2024-11-04 12:33:03.849685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.309 [2024-11-04 12:33:03.849702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.309 [2024-11-04 12:33:03.858155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.309 [2024-11-04 12:33:03.858488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.309 [2024-11-04 12:33:03.858505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.309 [2024-11-04 12:33:03.868604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.309 [2024-11-04 12:33:03.868907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.309 [2024-11-04 12:33:03.868925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.570 [2024-11-04 12:33:03.878081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.570 [2024-11-04 12:33:03.878421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.570 [2024-11-04 12:33:03.878439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.570 [2024-11-04 12:33:03.887877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.570 [2024-11-04 12:33:03.888205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.570 [2024-11-04 12:33:03.888222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.570 [2024-11-04 12:33:03.897554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.570 [2024-11-04 12:33:03.897859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.570 [2024-11-04 12:33:03.897877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.570 [2024-11-04 12:33:03.906244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.570 [2024-11-04 12:33:03.906548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.570 [2024-11-04 12:33:03.906565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.570 [2024-11-04 12:33:03.916428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.570 [2024-11-04 12:33:03.916771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.570 [2024-11-04 12:33:03.916789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.570 [2024-11-04 12:33:03.925306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.570 [2024-11-04 12:33:03.925569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.570 [2024-11-04 12:33:03.925585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.570 [2024-11-04 12:33:03.934824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.570 [2024-11-04 12:33:03.935170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.570 [2024-11-04 12:33:03.935188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.570 [2024-11-04 12:33:03.943020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.570 [2024-11-04 12:33:03.943218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.570 [2024-11-04 12:33:03.943235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.570 [2024-11-04 12:33:03.953618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.570 [2024-11-04 12:33:03.953945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.570 [2024-11-04 12:33:03.953963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.570 [2024-11-04 12:33:03.964176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.570 [2024-11-04 12:33:03.964516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:03.964536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:03.972334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:03.972535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:03.972552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:03.982501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:03.982842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:03.982860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:03.992556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:03.992631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:03.992646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.001191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.001497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.001515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.011098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.011439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.011457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.019395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.019595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.019611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.028540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.028857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.028875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.035057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.035399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.035416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.041114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.041319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.041336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.048670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.048993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.049011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.054194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.054395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.054412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.061773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.062080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.062097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.067976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.068231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.068247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.072952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.073154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.073170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.079261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.079596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.079614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.086625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.086934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.086952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.091097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.091296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.091313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.098800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.099117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.099134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.104183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.104440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.104458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.110822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.111142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.111159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.118278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.118480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.118496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.123736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.123953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.123969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.129184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.129385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.129401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.571 [2024-11-04 12:33:04.135609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.571 [2024-11-04 12:33:04.135953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.571 [2024-11-04 12:33:04.135971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.142580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.142795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.142812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.151031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.151385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.151406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.158566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.158852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.158870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.166792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.167119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.167136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.174062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.174253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.174269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.184324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.184648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.184666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.192367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.192555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.192572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.200769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.201068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.201086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.209197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.209388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.209405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.218068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.218374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.218392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.225201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.225510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.225528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.232294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.232482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.232499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.238771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.238961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.238978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.246622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.246935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.246953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.255790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.256157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.256174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.265757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.265979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.265995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.277010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.277362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.277380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.285967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.286184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.833 [2024-11-04 12:33:04.286200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.833 [2024-11-04 12:33:04.291185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.833 [2024-11-04 12:33:04.291496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.834 [2024-11-04 12:33:04.291513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.834 [2024-11-04 12:33:04.296665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.834 [2024-11-04 12:33:04.296861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.834 [2024-11-04 12:33:04.296878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.834 [2024-11-04 12:33:04.303439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.834 [2024-11-04 12:33:04.303769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.834 [2024-11-04 12:33:04.303787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.834 [2024-11-04 12:33:04.311654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.834 [2024-11-04 12:33:04.311972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.834 [2024-11-04 12:33:04.311989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.834 [2024-11-04 12:33:04.318051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.834 [2024-11-04 12:33:04.318433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.834 [2024-11-04 12:33:04.318451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.834 [2024-11-04 12:33:04.324819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.834 [2024-11-04 12:33:04.325160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.834 [2024-11-04 12:33:04.325178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.834 [2024-11-04 12:33:04.332110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.834 [2024-11-04 12:33:04.332432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.834 [2024-11-04 12:33:04.332450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.834 [2024-11-04 12:33:04.337240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.834 [2024-11-04 12:33:04.337430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.834 [2024-11-04 12:33:04.337446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.834 [2024-11-04 12:33:04.344362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.834 [2024-11-04 12:33:04.344691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.834 [2024-11-04 12:33:04.344709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.834 [2024-11-04 12:33:04.351765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.834 [2024-11-04 12:33:04.352097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.834 [2024-11-04 12:33:04.352118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.834 [2024-11-04 12:33:04.358095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.834 [2024-11-04 12:33:04.358357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.834 [2024-11-04 12:33:04.358374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.834 [2024-11-04 12:33:04.364764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.834 [2024-11-04 12:33:04.364973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.834 [2024-11-04 12:33:04.364989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:29.834 [2024-11-04 12:33:04.372183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.834 [2024-11-04 12:33:04.372431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.834 [2024-11-04 12:33:04.372448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.834 [2024-11-04 12:33:04.379572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.834 [2024-11-04 12:33:04.379768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.834 [2024-11-04 12:33:04.379785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:29.834 [2024-11-04 12:33:04.387553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.834 [2024-11-04 12:33:04.387894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.834 [2024-11-04 12:33:04.387912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:29.834 [2024-11-04 12:33:04.396640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:29.834 [2024-11-04 12:33:04.396927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.834 [2024-11-04 12:33:04.396945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.096 [2024-11-04 12:33:04.402867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.096 [2024-11-04 12:33:04.403090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.096 [2024-11-04 12:33:04.403107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.096 [2024-11-04 12:33:04.410319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.096 [2024-11-04 12:33:04.410507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.096 [2024-11-04 12:33:04.410524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.096 [2024-11-04 12:33:04.415621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.096 [2024-11-04 12:33:04.415815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.096 [2024-11-04 12:33:04.415832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.096 [2024-11-04 12:33:04.422322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.096 [2024-11-04 12:33:04.422642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.096 [2024-11-04 12:33:04.422659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.096 [2024-11-04 12:33:04.428192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.096 [2024-11-04 12:33:04.428371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.096 [2024-11-04 12:33:04.428387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.096 [2024-11-04 12:33:04.434569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.434869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.434886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.443414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.443615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.443631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.450202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.450500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.450518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.458788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.459089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.459106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.466541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.466854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.466872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.473353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.473596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.473617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.478591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.478775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.478792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.486232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.486528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.486546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.493738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.494019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.494036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.501704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.501937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.501954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.510683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.510865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.510882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.519617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.519799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.519816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.526745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.527133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.527151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.533229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.533408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.533425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.539078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.539265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.539282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.097 4811.00 IOPS, 601.38 MiB/s [2024-11-04T11:33:04.667Z] [2024-11-04 12:33:04.547352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.547672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.547689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.554705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.555002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.555020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.560914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.561092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.561109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.568446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:30.097 [2024-11-04 12:33:04.568744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.097 [2024-11-04 12:33:04.568767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:30.097 [2024-11-04 12:33:04.574081]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.097 [2024-11-04 12:33:04.574414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.097 [2024-11-04 12:33:04.574431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.097 [2024-11-04 12:33:04.579143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.097 [2024-11-04 12:33:04.579324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.097 [2024-11-04 12:33:04.579341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.097 [2024-11-04 12:33:04.585111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.097 [2024-11-04 12:33:04.585416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.097 [2024-11-04 12:33:04.585433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.097 [2024-11-04 12:33:04.592617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.097 [2024-11-04 12:33:04.592935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.097 [2024-11-04 12:33:04.592953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.097 [2024-11-04 12:33:04.598596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.097 [2024-11-04 12:33:04.598864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.097 [2024-11-04 12:33:04.598882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.097 [2024-11-04 12:33:04.605807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.097 [2024-11-04 12:33:04.605986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.097 [2024-11-04 12:33:04.606003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.097 [2024-11-04 12:33:04.611234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.098 [2024-11-04 12:33:04.611415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.098 [2024-11-04 12:33:04.611432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:28:30.098 [2024-11-04 12:33:04.615361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.098 [2024-11-04 12:33:04.615540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.098 [2024-11-04 12:33:04.615557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.098 [2024-11-04 12:33:04.620765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.098 [2024-11-04 12:33:04.620994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.098 [2024-11-04 12:33:04.621011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.098 [2024-11-04 12:33:04.626915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.098 [2024-11-04 12:33:04.627203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.098 [2024-11-04 12:33:04.627220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.098 [2024-11-04 12:33:04.633148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.098 [2024-11-04 12:33:04.633326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.098 [2024-11-04 12:33:04.633342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.098 [2024-11-04 12:33:04.640234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.098 [2024-11-04 12:33:04.640451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.098 [2024-11-04 12:33:04.640468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.098 [2024-11-04 12:33:04.648444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.098 [2024-11-04 12:33:04.648713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.098 [2024-11-04 12:33:04.648735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.098 [2024-11-04 12:33:04.654878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.098 [2024-11-04 12:33:04.655057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.098 [2024-11-04 12:33:04.655074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.098 [2024-11-04 12:33:04.661608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.098 [2024-11-04 12:33:04.661831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.098 [2024-11-04 12:33:04.661848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.359 [2024-11-04 12:33:04.671326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.359 [2024-11-04 12:33:04.671647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.359 [2024-11-04 12:33:04.671664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.359 [2024-11-04 12:33:04.680590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.359 [2024-11-04 12:33:04.680791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.359 [2024-11-04 12:33:04.680807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.359 [2024-11-04 12:33:04.686427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.359 [2024-11-04 12:33:04.686689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.359 [2024-11-04 12:33:04.686707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.359 [2024-11-04 12:33:04.694522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.359 [2024-11-04 12:33:04.694854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.359 [2024-11-04 12:33:04.694872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.359 [2024-11-04 12:33:04.701553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.359 [2024-11-04 12:33:04.701827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.359 [2024-11-04 12:33:04.701845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.359 [2024-11-04 12:33:04.708905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.359 [2024-11-04 12:33:04.709180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.359 [2024-11-04 12:33:04.709198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.359 [2024-11-04 12:33:04.714859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.359 [2024-11-04 12:33:04.715107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.359 [2024-11-04 12:33:04.715124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.359 [2024-11-04 12:33:04.720665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.359 [2024-11-04 12:33:04.721060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.359 [2024-11-04 12:33:04.721078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.359 [2024-11-04 12:33:04.728980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.359 [2024-11-04 12:33:04.729159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.359 [2024-11-04 12:33:04.729175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.359 [2024-11-04 12:33:04.736473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.359 [2024-11-04 12:33:04.736798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.359 [2024-11-04 12:33:04.736815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.359 [2024-11-04 12:33:04.742952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.359 [2024-11-04 12:33:04.743149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.359 [2024-11-04 12:33:04.743166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.359 [2024-11-04 12:33:04.748822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.359 [2024-11-04 12:33:04.749003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.359 [2024-11-04 12:33:04.749020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.359 [2024-11-04 12:33:04.754994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.359 [2024-11-04 12:33:04.755267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.755284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.759527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.759707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.759724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.763840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.764020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.764036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.769157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.769465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.769483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.776085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.776264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.776281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.782143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.782410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.782428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.787373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.787552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.787569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.794564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.794847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 
[2024-11-04 12:33:04.794865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.799535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.799713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.799730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.804516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.804770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.804787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.811370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.811672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.811689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.817690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.817992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.818017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.825537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.825716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.825733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.831368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.831547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.831563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.836049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.836231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.836247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.843714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.843916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.843932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.851428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.851735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.851758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.859719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.860043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.860061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.868083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.868363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.868381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.876216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.876527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.876544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.884315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.884552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.884569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.891215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.891437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.891454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.897934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.898114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.898131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.905842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.906161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.906178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.913627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.913904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.913922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.360 [2024-11-04 12:33:04.921445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.360 [2024-11-04 12:33:04.921729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.360 [2024-11-04 12:33:04.921750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:04.927932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:04.928113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:04.928130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:04.935060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:04.935383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:04.935400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:04.944132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:04.944378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:04.944395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:04.953239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:04.953612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:04.953630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:04.958832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:04.959012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:04.959029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:04.967288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:04.967467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:04.967483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:04.976612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:04.976938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:04.976955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:04.985538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:04.985878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:04.985895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:04.994817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:04.995113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:04.995131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:05.002848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 
[2024-11-04 12:33:05.003088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:05.003105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:05.012261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:05.012511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:05.012528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:05.022170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:05.022481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:05.022503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:05.029593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:05.029922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:05.029940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:05.038721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:05.039059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:05.039077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:05.049850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:05.050175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:05.050193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:05.061240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:05.061476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:05.061492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:05.072119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) 
with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:05.072311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:05.072328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:05.082282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:05.082568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:05.082586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:05.093347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:05.093686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:05.093704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:05.104440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:05.104786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:05.104803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:05.115993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:05.116292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:05.116310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:05.126888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:05.127196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:05.127214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:05.138008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:05.138235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:05.138252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:05.148861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:05.149121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:05.149139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:05.158438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:05.158773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:05.158791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.622 [2024-11-04 12:33:05.167803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.622 [2024-11-04 12:33:05.168054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.622 [2024-11-04 12:33:05.168072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.623 [2024-11-04 12:33:05.174930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.623 [2024-11-04 12:33:05.175180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.623 [2024-11-04 12:33:05.175197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.623 [2024-11-04 12:33:05.181197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.623 [2024-11-04 12:33:05.181485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.623 [2024-11-04 12:33:05.181503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.623 [2024-11-04 12:33:05.187629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.623 [2024-11-04 12:33:05.187802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.623 [2024-11-04 12:33:05.187819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.885 [2024-11-04 12:33:05.196400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.885 [2024-11-04 12:33:05.196679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.885 [2024-11-04 12:33:05.196696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.885 [2024-11-04 12:33:05.201745] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.885 [2024-11-04 12:33:05.201917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.885 [2024-11-04 12:33:05.201934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.885 [2024-11-04 12:33:05.205172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.885 [2024-11-04 12:33:05.205335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.885 [2024-11-04 12:33:05.205352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.885 [2024-11-04 12:33:05.209442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.885 [2024-11-04 12:33:05.209609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.885 [2024-11-04 12:33:05.209625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.885 [2024-11-04 12:33:05.215029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.885 [2024-11-04 12:33:05.215376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.885 [2024-11-04 12:33:05.215393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.885 [2024-11-04 12:33:05.223009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.885 [2024-11-04 12:33:05.223307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.885 [2024-11-04 12:33:05.223324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.885 [2024-11-04 12:33:05.233086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.885 [2024-11-04 12:33:05.233369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.885 [2024-11-04 12:33:05.233386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.885 [2024-11-04 12:33:05.243162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.885 [2024-11-04 12:33:05.243403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.885 [2024-11-04 12:33:05.243419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:30.885 [2024-11-04 12:33:05.253950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.885 [2024-11-04 12:33:05.254119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.885 [2024-11-04 12:33:05.254139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.885 [2024-11-04 12:33:05.264649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.885 [2024-11-04 12:33:05.264890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.885 [2024-11-04 12:33:05.264907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.885 [2024-11-04 12:33:05.275370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.885 [2024-11-04 12:33:05.275556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.885 [2024-11-04 12:33:05.275573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.885 [2024-11-04 12:33:05.286329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.885 [2024-11-04 12:33:05.286534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.885 [2024-11-04 12:33:05.286552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.885 [2024-11-04 12:33:05.297100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.885 [2024-11-04 12:33:05.297384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.885 [2024-11-04 12:33:05.297401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.885 [2024-11-04 12:33:05.307425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.885 [2024-11-04 12:33:05.307650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.885 [2024-11-04 12:33:05.307666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.885 [2024-11-04 12:33:05.318130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.318493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.318511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.328044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.328311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.328329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.338956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.339193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.339211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.349646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.349827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.349844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.359432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.359605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.359622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.369922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.370325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.370342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.379936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.380151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.380167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.391065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.391346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.391363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.398193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.398523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.398540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.406404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.406552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.406568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.414125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.414417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.414434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.418837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.418962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.418981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.422159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.422279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.422294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.425546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.425665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.425681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.428852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.428973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.428989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.432118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.432236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.432252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.435387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.435505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.435520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.438670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.438796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.438811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.442553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.442697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.442712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.447186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.447296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.447311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.886 [2024-11-04 12:33:05.450732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:30.886 [2024-11-04 12:33:05.451107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.886 [2024-11-04 12:33:05.451124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.148 [2024-11-04 12:33:05.457633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:31.148 [2024-11-04 12:33:05.457921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.148 
[2024-11-04 12:33:05.457937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.148 [2024-11-04 12:33:05.465403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:31.148 [2024-11-04 12:33:05.465624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.148 [2024-11-04 12:33:05.465640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.148 [2024-11-04 12:33:05.470617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:31.148 [2024-11-04 12:33:05.470727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.148 [2024-11-04 12:33:05.470743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.148 [2024-11-04 12:33:05.477729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:31.148 [2024-11-04 12:33:05.478005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.148 [2024-11-04 12:33:05.478022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.148 [2024-11-04 12:33:05.485868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:31.148 [2024-11-04 12:33:05.486107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.148 [2024-11-04 12:33:05.486123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.148 [2024-11-04 12:33:05.489917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:31.148 [2024-11-04 12:33:05.490031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.148 [2024-11-04 12:33:05.490046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.148 [2024-11-04 12:33:05.493616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:31.148 [2024-11-04 12:33:05.493750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.148 [2024-11-04 12:33:05.493766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.148 [2024-11-04 12:33:05.497205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:31.148 [2024-11-04 12:33:05.497336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:31.148 [2024-11-04 12:33:05.497352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.148 [2024-11-04 12:33:05.501123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:31.148 [2024-11-04 12:33:05.501233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.148 [2024-11-04 12:33:05.501249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.148 [2024-11-04 12:33:05.504479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:31.148 [2024-11-04 12:33:05.504585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.148 [2024-11-04 12:33:05.504601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.148 [2024-11-04 12:33:05.507810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:31.148 [2024-11-04 12:33:05.507919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.148 [2024-11-04 12:33:05.507934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.148 [2024-11-04 12:33:05.512065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:31.148 [2024-11-04 12:33:05.512174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.148 [2024-11-04 12:33:05.512190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.148 [2024-11-04 12:33:05.516149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:31.148 [2024-11-04 12:33:05.516447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.148 [2024-11-04 12:33:05.516464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.148 [2024-11-04 12:33:05.519520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:31.148 [2024-11-04 12:33:05.519634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.148 [2024-11-04 12:33:05.519650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.148 [2024-11-04 12:33:05.524270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90 00:28:31.149 [2024-11-04 12:33:05.524615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.149 [2024-11-04 12:33:05.524631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.149 [2024-11-04 12:33:05.528524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:31.149 [2024-11-04 12:33:05.528633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.149 [2024-11-04 12:33:05.528649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.149 [2024-11-04 12:33:05.533583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:31.149 [2024-11-04 12:33:05.533862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.149 [2024-11-04 12:33:05.533881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:31.149 [2024-11-04 12:33:05.537090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:31.149 [2024-11-04 12:33:05.537201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.149 [2024-11-04 12:33:05.537217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:31.149 [2024-11-04 12:33:05.540373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:31.149 [2024-11-04 12:33:05.540483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.149 [2024-11-04 12:33:05.540499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:31.149 4582.00 IOPS, 572.75 MiB/s [2024-11-04T11:33:05.719Z] [2024-11-04 12:33:05.546568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe14790) with pdu=0x2000166fef90
00:28:31.149 [2024-11-04 12:33:05.546891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:31.149 [2024-11-04 12:33:05.546907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:31.149
00:28:31.149 Latency(us)
00:28:31.149 [2024-11-04T11:33:05.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:31.149 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:31.149 nvme0n1 : 2.01 4580.52 572.57 0.00 0.00 3487.11 1474.56 11632.64
00:28:31.149 [2024-11-04T11:33:05.719Z] ===================================================================================================================
00:28:31.149 [2024-11-04T11:33:05.719Z] Total : 4580.52 572.57 0.00 0.00 3487.11 1474.56 11632.64
00:28:31.149 {
00:28:31.149 "results": [
00:28:31.149 {
00:28:31.149 "job": "nvme0n1",
00:28:31.149 "core_mask": "0x2",
00:28:31.149 "workload": "randwrite",
00:28:31.149 "status": "finished",
00:28:31.149 "queue_depth": 16,
00:28:31.149 "io_size": 131072,
00:28:31.149 "runtime": 2.005229,
00:28:31.149 "iops": 4580.524219428305,
00:28:31.149 "mibps": 572.5655274285381,
00:28:31.149 "io_failed": 0,
00:28:31.149 "io_timeout": 0,
00:28:31.149 "avg_latency_us": 3487.1059277808026,
00:28:31.149 "min_latency_us": 1474.56,
00:28:31.149 "max_latency_us": 11632.64
00:28:31.149 }
00:28:31.149 ],
00:28:31.149 "core_count": 1
00:28:31.149 }
00:28:31.149 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:31.149 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:31.149 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:31.149 | .driver_specific
00:28:31.149 | .nvme_error
00:28:31.149 | .status_code
00:28:31.149 | .command_transient_transport_error'
00:28:31.149 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:31.409 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 296 > 0 ))
00:28:31.409 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1815403
00:28:31.409 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1815403 ']'
00:28:31.409 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1815403
00:28:31.409 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:31.409 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:31.409 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1815403
00:28:31.409 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:31.409 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:31.409 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1815403'
00:28:31.409 killing process with pid 1815403
00:28:31.409 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1815403
00:28:31.409 Received shutdown signal, test time was about 2.000000 seconds
00:28:31.409
00:28:31.409 Latency(us)
00:28:31.409 [2024-11-04T11:33:05.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:31.409 [2024-11-04T11:33:05.979Z] ===================================================================================================================
00:28:31.409 [2024-11-04T11:33:05.979Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:31.409 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1815403
00:28:31.409 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1812896
00:28:31.409 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1812896 ']'
00:28:31.409 12:33:05
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1812896 00:28:31.409 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:31.409 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:31.409 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1812896 00:28:31.669 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:31.669 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:31.669 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1812896' 00:28:31.669 killing process with pid 1812896 00:28:31.669 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1812896 00:28:31.669 12:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1812896 00:28:31.670 00:28:31.670 real 0m15.940s 00:28:31.670 user 0m31.550s 00:28:31.670 sys 0m3.443s 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.670 ************************************ 00:28:31.670 END TEST nvmf_digest_error 00:28:31.670 ************************************ 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:31.670 rmmod nvme_tcp 00:28:31.670 rmmod nvme_fabrics 00:28:31.670 rmmod nvme_keyring 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 1812896 ']' 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 1812896 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1812896 ']' 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1812896 00:28:31.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1812896) - No such process 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1812896 is not found' 00:28:31.670 Process with pid 1812896 is not found 
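The (( 296 > 0 )) check above (host/digest.sh@71) is the pass condition for this test: the data digest errors seen on the wire must show up in the bdev's NVMe error counters as transient transport errors. A minimal standalone sketch of that extraction, assuming the same SPDK tree layout and bperf RPC socket path used in this run:

  get_transient_errcount() {
      # Hedged sketch: read the transient transport error counter for one bdev
      # over the SPDK JSON-RPC socket, mirroring host/digest.sh@27-28 above.
      local bdev=$1
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }

  (( $(get_transient_errcount nvme0n1) > 0 )) && echo 'digest errors were reported as transient transport errors'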
00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.670 12:33:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:34.213 00:28:34.213 real 0m41.011s 00:28:34.213 user 1m3.850s 00:28:34.213 sys 0m12.431s 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:34.213 ************************************ 00:28:34.213 END TEST nvmf_digest 00:28:34.213 ************************************ 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.213 ************************************ 00:28:34.213 START TEST nvmf_bdevperf 00:28:34.213 ************************************ 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:34.213 * Looking for test storage... 
00:28:34.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:34.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.213 --rc genhtml_branch_coverage=1 00:28:34.213 --rc genhtml_function_coverage=1 00:28:34.213 --rc genhtml_legend=1 00:28:34.213 --rc geninfo_all_blocks=1 00:28:34.213 --rc geninfo_unexecuted_blocks=1 00:28:34.213 00:28:34.213 ' 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:34.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.213 --rc genhtml_branch_coverage=1 00:28:34.213 --rc genhtml_function_coverage=1 00:28:34.213 --rc genhtml_legend=1 00:28:34.213 --rc geninfo_all_blocks=1 00:28:34.213 --rc geninfo_unexecuted_blocks=1 00:28:34.213 00:28:34.213 ' 00:28:34.213 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:34.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.213 --rc genhtml_branch_coverage=1 00:28:34.213 --rc genhtml_function_coverage=1 00:28:34.213 --rc genhtml_legend=1 00:28:34.213 --rc geninfo_all_blocks=1 00:28:34.214 --rc geninfo_unexecuted_blocks=1 00:28:34.214 00:28:34.214 ' 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:34.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.214 --rc genhtml_branch_coverage=1 00:28:34.214 --rc genhtml_function_coverage=1 00:28:34.214 --rc genhtml_legend=1 00:28:34.214 --rc geninfo_all_blocks=1 00:28:34.214 --rc geninfo_unexecuted_blocks=1 00:28:34.214 00:28:34.214 ' 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:34.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:34.214 12:33:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:42.357 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:42.357 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
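The discovery pass above resolves each E810 PCI function to its kernel net device by globbing sysfs (the pci_net_devs assignment at nvmf/common.sh@409). A standalone sketch of the same lookup, using the two PCI addresses found in this run:

  # Hedged sketch: map PCI functions to net device names the way the
  # pci_net_devs glob does; prints e.g. '0000:4b:00.0 -> cvl_0_0'.
  for pci in 0000:4b:00.0 0000:4b:00.1; do
      for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
          [[ -e $netdir ]] || continue   # skip functions with no bound net driver
          echo "$pci -> ${netdir##*/}"
      done
  done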
00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:42.357 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:42.357 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:42.357 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:42.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:42.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:28:42.358 00:28:42.358 --- 10.0.0.2 ping statistics --- 00:28:42.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.358 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:42.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:42.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:28:42.358 00:28:42.358 --- 10.0.0.1 ping statistics --- 00:28:42.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.358 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1820781 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1820781 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1820781 ']' 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:42.358 12:33:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.358 [2024-11-04 12:33:15.820121] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:28:42.358 [2024-11-04 12:33:15.820178] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.358 [2024-11-04 12:33:15.908087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:42.358 [2024-11-04 12:33:15.961054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.358 [2024-11-04 12:33:15.961108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.358 [2024-11-04 12:33:15.961117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.358 [2024-11-04 12:33:15.961125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.358 [2024-11-04 12:33:15.961135] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:42.358 [2024-11-04 12:33:15.963141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.358 [2024-11-04 12:33:15.963310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.358 [2024-11-04 12:33:15.963309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.358 [2024-11-04 12:33:16.666024] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.358 Malloc0 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:42.358 [2024-11-04 12:33:16.731730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=()
00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config
00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:28:42.358 {
00:28:42.358 "params": {
00:28:42.358 "name": "Nvme$subsystem",
00:28:42.358 "trtype": "$TEST_TRANSPORT",
00:28:42.358 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:42.358 "adrfam": "ipv4",
00:28:42.358 "trsvcid": "$NVMF_PORT",
00:28:42.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:42.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:42.358 "hdgst": ${hdgst:-false},
00:28:42.358 "ddgst": ${ddgst:-false}
00:28:42.358 },
00:28:42.358 "method": "bdev_nvme_attach_controller"
00:28:42.358 }
00:28:42.358 EOF
00:28:42.358 )")
00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat
00:28:42.358 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq .
00:28:42.359 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=,
00:28:42.359 12:33:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:28:42.359 "params": {
00:28:42.359 "name": "Nvme1",
00:28:42.359 "trtype": "tcp",
00:28:42.359 "traddr": "10.0.0.2",
00:28:42.359 "adrfam": "ipv4",
00:28:42.359 "trsvcid": "4420",
00:28:42.359 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:42.359 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:42.359 "hdgst": false,
00:28:42.359 "ddgst": false
00:28:42.359 },
00:28:42.359 "method": "bdev_nvme_attach_controller"
00:28:42.359 }'
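The rpc_cmd sequence above builds the target side end to end: a TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, a namespace backed by Malloc0, and a listener on 10.0.0.2:4420; gen_nvmf_target_json then renders the matching host-side attach configuration that bdevperf reads from /dev/fd/62. In these test scripts rpc_cmd is a thin wrapper around the SPDK RPC client, so a rough standalone equivalent of the same bring-up would be the following sketch (flags copied verbatim from the trace above; it assumes the target's RPC socket is at the default /var/tmp/spdk.sock):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8192-byte in-capsule data
    $RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420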
00:28:42.359 [2024-11-04 12:33:16.798210] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
00:28:42.359 [2024-11-04 12:33:16.798264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1820908 ]
00:28:42.359 [2024-11-04 12:33:16.858777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:42.359 [2024-11-04 12:33:16.894702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:42.930 Running I/O for 1 seconds...
00:28:43.873 8874.00 IOPS, 34.66 MiB/s
00:28:43.873 
00:28:43.873 Latency(us)
00:28:43.873 [2024-11-04T11:33:18.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:43.873 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:43.873 Verification LBA range: start 0x0 length 0x4000
00:28:43.873 Nvme1n1 : 1.01 8939.88 34.92 0.00 0.00 14229.54 1324.37 14417.92
00:28:43.873 [2024-11-04T11:33:18.443Z] ===================================================================================================================
00:28:43.873 [2024-11-04T11:33:18.443Z] Total : 8939.88 34.92 0.00 0.00 14229.54 1324.37 14417.92
00:28:43.873 12:33:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1821246
00:28:43.873 12:33:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:28:43.873 12:33:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:28:43.873 12:33:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:28:43.873 12:33:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=()
00:28:43.873 12:33:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config
00:28:43.873 12:33:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:28:43.873 12:33:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:28:43.873 {
00:28:43.873 "params": {
00:28:43.873 "name": "Nvme$subsystem",
00:28:43.873 "trtype": "$TEST_TRANSPORT",
00:28:43.873 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:43.873 "adrfam": "ipv4",
00:28:43.873 "trsvcid": "$NVMF_PORT",
00:28:43.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:43.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:43.873 "hdgst": ${hdgst:-false},
00:28:43.873 "ddgst": ${ddgst:-false}
00:28:43.873 },
00:28:43.873 "method": "bdev_nvme_attach_controller"
00:28:43.873 }
00:28:43.873 EOF
00:28:43.873 )")
00:28:43.873 12:33:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat
00:28:43.873 12:33:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq .
00:28:43.873 12:33:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=,
00:28:43.873 12:33:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:28:43.873 "params": {
00:28:43.873 "name": "Nvme1",
00:28:43.873 "trtype": "tcp",
00:28:43.873 "traddr": "10.0.0.2",
00:28:43.873 "adrfam": "ipv4",
00:28:43.873 "trsvcid": "4420",
00:28:43.873 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:43.873 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:43.873 "hdgst": false,
00:28:43.873 "ddgst": false
00:28:43.873 },
00:28:43.873 "method": "bdev_nvme_attach_controller"
00:28:43.873 }'
00:28:43.873 [2024-11-04 12:33:18.388570] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
00:28:43.873 [2024-11-04 12:33:18.388644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1821246 ]
00:28:44.133 [2024-11-04 12:33:18.450769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:44.133 [2024-11-04 12:33:18.486487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:44.133 Running I/O for 15 seconds...
00:28:46.461 11277.00 IOPS, 44.05 MiB/s
[2024-11-04T11:33:21.607Z] 11165.00 IOPS, 43.61 MiB/s
[2024-11-04T11:33:21.607Z] 12:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1820781
00:28:47.037 12:33:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:28:47.037 [2024-11-04 12:33:21.343701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.037 [2024-11-04 12:33:21.343836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.037 [2024-11-04 12:33:21.343859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.037 [2024-11-04 12:33:21.343869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.037 [2024-11-04 12:33:21.343879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.037 [2024-11-04 12:33:21.343888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.037 [2024-11-04 12:33:21.343898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.037 [2024-11-04 12:33:21.343906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.037 [2024-11-04 12:33:21.343917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.037 [2024-11-04 12:33:21.343927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.037 [2024-11-04 12:33:21.343939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.037 [2024-11-04
12:33:21.343947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.343958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.343968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.343980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.343989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344319] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.037 [2024-11-04 12:33:21.344362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.037 [2024-11-04 12:33:21.344369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:47.038 [2024-11-04 12:33:21.344838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.344989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.344996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 
12:33:21.345006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.345014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.345024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.345031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.038 [2024-11-04 12:33:21.345041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.038 [2024-11-04 12:33:21.345048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.039 [2024-11-04 12:33:21.345065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.039 [2024-11-04 12:33:21.345081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.039 [2024-11-04 12:33:21.345098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.039 [2024-11-04 12:33:21.345114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.039 [2024-11-04 12:33:21.345133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.039 [2024-11-04 12:33:21.345149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.039 [2024-11-04 12:33:21.345166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.039 [2024-11-04 12:33:21.345183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.039 [2024-11-04 12:33:21.345200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.039 [2024-11-04 12:33:21.345216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.039 [2024-11-04 12:33:21.345233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.039 [2024-11-04 12:33:21.345250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.039 [2024-11-04 12:33:21.345267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.039 [2024-11-04 12:33:21.345284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.039 [2024-11-04 12:33:21.345300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345345] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:109656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:109688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 
lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:109728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:109760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.039 [2024-11-04 12:33:21.345704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.039 [2024-11-04 12:33:21.345713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.345720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.345729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.345736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.345748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.345758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.345769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.345777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.345786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.345793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.345803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.345810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.345819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.345826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.345836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.345843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.345852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 
[2024-11-04 12:33:21.345859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.345868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.345875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.345885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.345892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.345902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.345909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.345918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.345926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.345935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.345943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.345952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.345959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.345968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.345977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.345986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.345994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.346003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.346010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.346019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.040 [2024-11-04 12:33:21.346027] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.346036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21302e0 is same with the state(6) to be set 00:28:47.040 [2024-11-04 12:33:21.346045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:47.040 [2024-11-04 12:33:21.346051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:47.040 [2024-11-04 12:33:21.346058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109968 len:8 PRP1 0x0 PRP2 0x0 00:28:47.040 [2024-11-04 12:33:21.346066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.040 [2024-11-04 12:33:21.346106] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21302e0 was disconnected and freed. reset controller. 00:28:47.040 [2024-11-04 12:33:21.349665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.040 [2024-11-04 12:33:21.349714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:47.040 [2024-11-04 12:33:21.350351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.040 [2024-11-04 12:33:21.350369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:47.040 [2024-11-04 12:33:21.350377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:47.040 [2024-11-04 12:33:21.350594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:47.040 [2024-11-04 12:33:21.350819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.040 [2024-11-04 12:33:21.350829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.040 [2024-11-04 12:33:21.350837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.040 [2024-11-04 12:33:21.354321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.040 [2024-11-04 12:33:21.363766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.040 [2024-11-04 12:33:21.364297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.040 [2024-11-04 12:33:21.364314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.040 [2024-11-04 12:33:21.364322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.040 [2024-11-04 12:33:21.364538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.040 [2024-11-04 12:33:21.364763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.040 [2024-11-04 12:33:21.364771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.040 [2024-11-04 12:33:21.364779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.040 [2024-11-04 12:33:21.368260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.040 [2024-11-04 12:33:21.377500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.040 [2024-11-04 12:33:21.377994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.040 [2024-11-04 12:33:21.378011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.040 [2024-11-04 12:33:21.378019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.040 [2024-11-04 12:33:21.378235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.040 [2024-11-04 12:33:21.378450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.040 [2024-11-04 12:33:21.378459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.040 [2024-11-04 12:33:21.378466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.040 [2024-11-04 12:33:21.381955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.040 [2024-11-04 12:33:21.391402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.040 [2024-11-04 12:33:21.392063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.040 [2024-11-04 12:33:21.392101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.040 [2024-11-04 12:33:21.392112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.040 [2024-11-04 12:33:21.392351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.040 [2024-11-04 12:33:21.392570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.040 [2024-11-04 12:33:21.392579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.040 [2024-11-04 12:33:21.392587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.040 [2024-11-04 12:33:21.396089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.040 [2024-11-04 12:33:21.405332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.040 [2024-11-04 12:33:21.405855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.040 [2024-11-04 12:33:21.405893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.040 [2024-11-04 12:33:21.405905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.040 [2024-11-04 12:33:21.406144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.041 [2024-11-04 12:33:21.406363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.041 [2024-11-04 12:33:21.406372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.041 [2024-11-04 12:33:21.406380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.041 [2024-11-04 12:33:21.409875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.041 [2024-11-04 12:33:21.419123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.041 [2024-11-04 12:33:21.419706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.041 [2024-11-04 12:33:21.419744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.041 [2024-11-04 12:33:21.419764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.041 [2024-11-04 12:33:21.419999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.041 [2024-11-04 12:33:21.420219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.041 [2024-11-04 12:33:21.420227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.041 [2024-11-04 12:33:21.420235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.041 [2024-11-04 12:33:21.423718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.041 [2024-11-04 12:33:21.432970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.041 [2024-11-04 12:33:21.433559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.041 [2024-11-04 12:33:21.433596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.041 [2024-11-04 12:33:21.433609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.041 [2024-11-04 12:33:21.433853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.041 [2024-11-04 12:33:21.434073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.041 [2024-11-04 12:33:21.434082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.041 [2024-11-04 12:33:21.434089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.041 [2024-11-04 12:33:21.437572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.041 [2024-11-04 12:33:21.446829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.041 [2024-11-04 12:33:21.447414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.041 [2024-11-04 12:33:21.447452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.041 [2024-11-04 12:33:21.447463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.041 [2024-11-04 12:33:21.447698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.041 [2024-11-04 12:33:21.447926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.041 [2024-11-04 12:33:21.447937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.041 [2024-11-04 12:33:21.447944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.041 [2024-11-04 12:33:21.451428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.041 [2024-11-04 12:33:21.460681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.041 [2024-11-04 12:33:21.461199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.041 [2024-11-04 12:33:21.461237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.041 [2024-11-04 12:33:21.461253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.041 [2024-11-04 12:33:21.461489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.041 [2024-11-04 12:33:21.461707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.041 [2024-11-04 12:33:21.461716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.041 [2024-11-04 12:33:21.461724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.041 [2024-11-04 12:33:21.465216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.041 [2024-11-04 12:33:21.474458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.041 [2024-11-04 12:33:21.475009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.041 [2024-11-04 12:33:21.475046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.041 [2024-11-04 12:33:21.475056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.041 [2024-11-04 12:33:21.475291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.041 [2024-11-04 12:33:21.475509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.041 [2024-11-04 12:33:21.475518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.041 [2024-11-04 12:33:21.475526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.041 [2024-11-04 12:33:21.479017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.041 [2024-11-04 12:33:21.488275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.041 [2024-11-04 12:33:21.488847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.041 [2024-11-04 12:33:21.488886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.041 [2024-11-04 12:33:21.488898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.041 [2024-11-04 12:33:21.489136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.041 [2024-11-04 12:33:21.489354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.041 [2024-11-04 12:33:21.489369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.041 [2024-11-04 12:33:21.489377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.041 [2024-11-04 12:33:21.492869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.041 [2024-11-04 12:33:21.502134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.041 [2024-11-04 12:33:21.502720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.041 [2024-11-04 12:33:21.502764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.041 [2024-11-04 12:33:21.502775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.041 [2024-11-04 12:33:21.503010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.041 [2024-11-04 12:33:21.503229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.041 [2024-11-04 12:33:21.503242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.041 [2024-11-04 12:33:21.503250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.041 [2024-11-04 12:33:21.506737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.041 [2024-11-04 12:33:21.515983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.041 [2024-11-04 12:33:21.516579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.042 [2024-11-04 12:33:21.516617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.042 [2024-11-04 12:33:21.516628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.042 [2024-11-04 12:33:21.516872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.042 [2024-11-04 12:33:21.517091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.042 [2024-11-04 12:33:21.517100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.042 [2024-11-04 12:33:21.517107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.042 [2024-11-04 12:33:21.520593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.042 [2024-11-04 12:33:21.529836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.042 [2024-11-04 12:33:21.530350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.042 [2024-11-04 12:33:21.530387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.042 [2024-11-04 12:33:21.530399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.042 [2024-11-04 12:33:21.530634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.042 [2024-11-04 12:33:21.530862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.042 [2024-11-04 12:33:21.530872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.042 [2024-11-04 12:33:21.530880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.042 [2024-11-04 12:33:21.534363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.042 [2024-11-04 12:33:21.543617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.042 [2024-11-04 12:33:21.544257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.042 [2024-11-04 12:33:21.544296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.042 [2024-11-04 12:33:21.544307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.042 [2024-11-04 12:33:21.544541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.042 [2024-11-04 12:33:21.544770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.042 [2024-11-04 12:33:21.544779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.042 [2024-11-04 12:33:21.544787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.042 [2024-11-04 12:33:21.548275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.042 [2024-11-04 12:33:21.557516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.042 [2024-11-04 12:33:21.558114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.042 [2024-11-04 12:33:21.558152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.042 [2024-11-04 12:33:21.558164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.042 [2024-11-04 12:33:21.558402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.042 [2024-11-04 12:33:21.558621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.042 [2024-11-04 12:33:21.558629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.042 [2024-11-04 12:33:21.558637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.042 [2024-11-04 12:33:21.562128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.042 [2024-11-04 12:33:21.571372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.042 [2024-11-04 12:33:21.572019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.042 [2024-11-04 12:33:21.572056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.042 [2024-11-04 12:33:21.572069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.042 [2024-11-04 12:33:21.572303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.042 [2024-11-04 12:33:21.572523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.042 [2024-11-04 12:33:21.572531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.042 [2024-11-04 12:33:21.572538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.042 [2024-11-04 12:33:21.576030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.042 [2024-11-04 12:33:21.585271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.042 [2024-11-04 12:33:21.585851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.042 [2024-11-04 12:33:21.585889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.042 [2024-11-04 12:33:21.585901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.042 [2024-11-04 12:33:21.586137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.042 [2024-11-04 12:33:21.586356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.042 [2024-11-04 12:33:21.586364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.042 [2024-11-04 12:33:21.586372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.042 [2024-11-04 12:33:21.589865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.042 [2024-11-04 12:33:21.599123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.042 [2024-11-04 12:33:21.599741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.042 [2024-11-04 12:33:21.599785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.042 [2024-11-04 12:33:21.599797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.042 [2024-11-04 12:33:21.600038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.042 [2024-11-04 12:33:21.600257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.042 [2024-11-04 12:33:21.600267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.042 [2024-11-04 12:33:21.600276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.305 [2024-11-04 12:33:21.603771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.305 [2024-11-04 12:33:21.613019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.305 [2024-11-04 12:33:21.613634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.305 [2024-11-04 12:33:21.613671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.306 [2024-11-04 12:33:21.613683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.306 [2024-11-04 12:33:21.613926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.306 [2024-11-04 12:33:21.614146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.306 [2024-11-04 12:33:21.614154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.306 [2024-11-04 12:33:21.614162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.306 [2024-11-04 12:33:21.617645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.306 [2024-11-04 12:33:21.626896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.306 [2024-11-04 12:33:21.627592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.306 [2024-11-04 12:33:21.627629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.306 [2024-11-04 12:33:21.627640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.306 [2024-11-04 12:33:21.627881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.306 [2024-11-04 12:33:21.628101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.306 [2024-11-04 12:33:21.628109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.306 [2024-11-04 12:33:21.628117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.306 [2024-11-04 12:33:21.631602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.306 [2024-11-04 12:33:21.640655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.306 [2024-11-04 12:33:21.641229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.306 [2024-11-04 12:33:21.641248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.306 [2024-11-04 12:33:21.641256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.306 [2024-11-04 12:33:21.641472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.306 [2024-11-04 12:33:21.641687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.306 [2024-11-04 12:33:21.641695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.306 [2024-11-04 12:33:21.641706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.306 [2024-11-04 12:33:21.645189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.306 [2024-11-04 12:33:21.654429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.306 [2024-11-04 12:33:21.654956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.306 [2024-11-04 12:33:21.654974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.306 [2024-11-04 12:33:21.654981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.306 [2024-11-04 12:33:21.655196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.306 [2024-11-04 12:33:21.655411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.306 [2024-11-04 12:33:21.655419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.306 [2024-11-04 12:33:21.655426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.306 [2024-11-04 12:33:21.658908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.306 [2024-11-04 12:33:21.668350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.306 [2024-11-04 12:33:21.669030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.306 [2024-11-04 12:33:21.669067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.306 [2024-11-04 12:33:21.669077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.306 [2024-11-04 12:33:21.669312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.306 [2024-11-04 12:33:21.669531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.306 [2024-11-04 12:33:21.669539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.306 [2024-11-04 12:33:21.669547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.306 [2024-11-04 12:33:21.673044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.306 [2024-11-04 12:33:21.682088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.306 [2024-11-04 12:33:21.682777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.306 [2024-11-04 12:33:21.682815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.306 [2024-11-04 12:33:21.682826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.306 10001.00 IOPS, 39.07 MiB/s [2024-11-04T11:33:21.876Z] [2024-11-04 12:33:21.684714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.306 [2024-11-04 12:33:21.684937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.306 [2024-11-04 12:33:21.684946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.306 [2024-11-04 12:33:21.684954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.306 [2024-11-04 12:33:21.688434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.306 [2024-11-04 12:33:21.695836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.306 [2024-11-04 12:33:21.696482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.306 [2024-11-04 12:33:21.696519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.306 [2024-11-04 12:33:21.696530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.306 [2024-11-04 12:33:21.696774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.306 [2024-11-04 12:33:21.696994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.306 [2024-11-04 12:33:21.697003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.306 [2024-11-04 12:33:21.697010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.306 [2024-11-04 12:33:21.700494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.306 [2024-11-04 12:33:21.709750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.306 [2024-11-04 12:33:21.710358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.306 [2024-11-04 12:33:21.710396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.306 [2024-11-04 12:33:21.710407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.306 [2024-11-04 12:33:21.710641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.306 [2024-11-04 12:33:21.710870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.306 [2024-11-04 12:33:21.710880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.306 [2024-11-04 12:33:21.710888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.306 [2024-11-04 12:33:21.714371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.306 [2024-11-04 12:33:21.723615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.306 [2024-11-04 12:33:21.724267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.306 [2024-11-04 12:33:21.724305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.306 [2024-11-04 12:33:21.724317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.306 [2024-11-04 12:33:21.724555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.306 [2024-11-04 12:33:21.724782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.306 [2024-11-04 12:33:21.724792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.306 [2024-11-04 12:33:21.724800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.306 [2024-11-04 12:33:21.728286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.306 [2024-11-04 12:33:21.737532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.306 [2024-11-04 12:33:21.738122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.306 [2024-11-04 12:33:21.738160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.306 [2024-11-04 12:33:21.738172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.306 [2024-11-04 12:33:21.738408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.306 [2024-11-04 12:33:21.738631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.306 [2024-11-04 12:33:21.738640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.306 [2024-11-04 12:33:21.738647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.306 [2024-11-04 12:33:21.742153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.306 [2024-11-04 12:33:21.751396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.306 [2024-11-04 12:33:21.752042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.306 [2024-11-04 12:33:21.752079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.306 [2024-11-04 12:33:21.752090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.307 [2024-11-04 12:33:21.752325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.307 [2024-11-04 12:33:21.752544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.307 [2024-11-04 12:33:21.752552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.307 [2024-11-04 12:33:21.752559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.307 [2024-11-04 12:33:21.756050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.307 [2024-11-04 12:33:21.765293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.307 [2024-11-04 12:33:21.765872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.307 [2024-11-04 12:33:21.765910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.307 [2024-11-04 12:33:21.765922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.307 [2024-11-04 12:33:21.766159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.307 [2024-11-04 12:33:21.766378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.307 [2024-11-04 12:33:21.766387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.307 [2024-11-04 12:33:21.766394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.307 [2024-11-04 12:33:21.769886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.307 [2024-11-04 12:33:21.779127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.307 [2024-11-04 12:33:21.779759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.307 [2024-11-04 12:33:21.779797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.307 [2024-11-04 12:33:21.779809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.307 [2024-11-04 12:33:21.780047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.307 [2024-11-04 12:33:21.780266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.307 [2024-11-04 12:33:21.780274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.307 [2024-11-04 12:33:21.780282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.307 [2024-11-04 12:33:21.783780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.307 [2024-11-04 12:33:21.793021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.307 [2024-11-04 12:33:21.793685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.307 [2024-11-04 12:33:21.793722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.307 [2024-11-04 12:33:21.793734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.307 [2024-11-04 12:33:21.793979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.307 [2024-11-04 12:33:21.794199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.307 [2024-11-04 12:33:21.794207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.307 [2024-11-04 12:33:21.794215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.307 [2024-11-04 12:33:21.797707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.307 [2024-11-04 12:33:21.806752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.307 [2024-11-04 12:33:21.807426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.307 [2024-11-04 12:33:21.807463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.307 [2024-11-04 12:33:21.807474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.307 [2024-11-04 12:33:21.807709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.307 [2024-11-04 12:33:21.807937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.307 [2024-11-04 12:33:21.807946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.307 [2024-11-04 12:33:21.807954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.307 [2024-11-04 12:33:21.811438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.307 [2024-11-04 12:33:21.820680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.307 [2024-11-04 12:33:21.821206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.307 [2024-11-04 12:33:21.821244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.307 [2024-11-04 12:33:21.821255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.307 [2024-11-04 12:33:21.821490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.307 [2024-11-04 12:33:21.821709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.307 [2024-11-04 12:33:21.821717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.307 [2024-11-04 12:33:21.821725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.307 [2024-11-04 12:33:21.825217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.307 [2024-11-04 12:33:21.834463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.307 [2024-11-04 12:33:21.835132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.307 [2024-11-04 12:33:21.835175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.307 [2024-11-04 12:33:21.835186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.307 [2024-11-04 12:33:21.835420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.307 [2024-11-04 12:33:21.835639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.307 [2024-11-04 12:33:21.835648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.307 [2024-11-04 12:33:21.835655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.307 [2024-11-04 12:33:21.839148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.307 [2024-11-04 12:33:21.848198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.307 [2024-11-04 12:33:21.848728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.307 [2024-11-04 12:33:21.848751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.307 [2024-11-04 12:33:21.848760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.307 [2024-11-04 12:33:21.848975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.307 [2024-11-04 12:33:21.849190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.307 [2024-11-04 12:33:21.849198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.307 [2024-11-04 12:33:21.849205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.307 [2024-11-04 12:33:21.852692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.307 [2024-11-04 12:33:21.861952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.307 [2024-11-04 12:33:21.862353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.307 [2024-11-04 12:33:21.862372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.307 [2024-11-04 12:33:21.862379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.307 [2024-11-04 12:33:21.862594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.307 [2024-11-04 12:33:21.862815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.307 [2024-11-04 12:33:21.862824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.307 [2024-11-04 12:33:21.862831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.307 [2024-11-04 12:33:21.866311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.569 [2024-11-04 12:33:21.875764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.569 [2024-11-04 12:33:21.876288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.569 [2024-11-04 12:33:21.876303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.569 [2024-11-04 12:33:21.876310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.569 [2024-11-04 12:33:21.876525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.569 [2024-11-04 12:33:21.876744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.569 [2024-11-04 12:33:21.876758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.569 [2024-11-04 12:33:21.876765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.569 [2024-11-04 12:33:21.880249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.569 [2024-11-04 12:33:21.889494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.569 [2024-11-04 12:33:21.890014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.569 [2024-11-04 12:33:21.890051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.569 [2024-11-04 12:33:21.890062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.569 [2024-11-04 12:33:21.890296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.569 [2024-11-04 12:33:21.890515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.569 [2024-11-04 12:33:21.890524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.569 [2024-11-04 12:33:21.890531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.569 [2024-11-04 12:33:21.894026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.569 [2024-11-04 12:33:21.903296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.569 [2024-11-04 12:33:21.903849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.569 [2024-11-04 12:33:21.903869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.569 [2024-11-04 12:33:21.903877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.570 [2024-11-04 12:33:21.904093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.570 [2024-11-04 12:33:21.904308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.570 [2024-11-04 12:33:21.904316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.570 [2024-11-04 12:33:21.904323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.570 [2024-11-04 12:33:21.907810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.570 [2024-11-04 12:33:21.917069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.570 [2024-11-04 12:33:21.917589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.570 [2024-11-04 12:33:21.917605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.570 [2024-11-04 12:33:21.917613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.570 [2024-11-04 12:33:21.917834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.570 [2024-11-04 12:33:21.918051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.570 [2024-11-04 12:33:21.918058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.570 [2024-11-04 12:33:21.918066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.570 [2024-11-04 12:33:21.921549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.570 [2024-11-04 12:33:21.930805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.570 [2024-11-04 12:33:21.931450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.570 [2024-11-04 12:33:21.931488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.570 [2024-11-04 12:33:21.931499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.570 [2024-11-04 12:33:21.931733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.570 [2024-11-04 12:33:21.931960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.570 [2024-11-04 12:33:21.931970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.570 [2024-11-04 12:33:21.931978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.570 [2024-11-04 12:33:21.935492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.570 [2024-11-04 12:33:21.944716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.570 [2024-11-04 12:33:21.945307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.570 [2024-11-04 12:33:21.945328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.570 [2024-11-04 12:33:21.945336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.570 [2024-11-04 12:33:21.945556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.570 [2024-11-04 12:33:21.945795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.570 [2024-11-04 12:33:21.945805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.570 [2024-11-04 12:33:21.945812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.570 [2024-11-04 12:33:21.949331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.570 [2024-11-04 12:33:21.958611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.570 [2024-11-04 12:33:21.959274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.570 [2024-11-04 12:33:21.959312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.570 [2024-11-04 12:33:21.959323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.570 [2024-11-04 12:33:21.959558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.570 [2024-11-04 12:33:21.959786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.570 [2024-11-04 12:33:21.959796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.570 [2024-11-04 12:33:21.959803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.570 [2024-11-04 12:33:21.963293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.570 [2024-11-04 12:33:21.972342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.570 [2024-11-04 12:33:21.972975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.570 [2024-11-04 12:33:21.973013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.570 [2024-11-04 12:33:21.973028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.570 [2024-11-04 12:33:21.973263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.570 [2024-11-04 12:33:21.973481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.570 [2024-11-04 12:33:21.973490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.570 [2024-11-04 12:33:21.973497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.570 [2024-11-04 12:33:21.976988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.570 [2024-11-04 12:33:21.986237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.570 [2024-11-04 12:33:21.986807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.570 [2024-11-04 12:33:21.986826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.570 [2024-11-04 12:33:21.986835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.570 [2024-11-04 12:33:21.987051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.570 [2024-11-04 12:33:21.987266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.570 [2024-11-04 12:33:21.987274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.570 [2024-11-04 12:33:21.987281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.570 [2024-11-04 12:33:21.990767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.570 [2024-11-04 12:33:22.000015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.570 [2024-11-04 12:33:22.000555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.570 [2024-11-04 12:33:22.000571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.570 [2024-11-04 12:33:22.000579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.570 [2024-11-04 12:33:22.000799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.570 [2024-11-04 12:33:22.001015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.570 [2024-11-04 12:33:22.001022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.570 [2024-11-04 12:33:22.001029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.570 [2024-11-04 12:33:22.004504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.570 [2024-11-04 12:33:22.013743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.570 [2024-11-04 12:33:22.014367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.570 [2024-11-04 12:33:22.014404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.570 [2024-11-04 12:33:22.014415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.570 [2024-11-04 12:33:22.014650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.570 [2024-11-04 12:33:22.014879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.570 [2024-11-04 12:33:22.014896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.570 [2024-11-04 12:33:22.014904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.570 [2024-11-04 12:33:22.018387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.570 [2024-11-04 12:33:22.027624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.570 [2024-11-04 12:33:22.028277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.570 [2024-11-04 12:33:22.028314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.570 [2024-11-04 12:33:22.028325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.570 [2024-11-04 12:33:22.028559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.570 [2024-11-04 12:33:22.028788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.570 [2024-11-04 12:33:22.028798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.570 [2024-11-04 12:33:22.028806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.570 [2024-11-04 12:33:22.032287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.570 [2024-11-04 12:33:22.041538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.570 [2024-11-04 12:33:22.042164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.571 [2024-11-04 12:33:22.042202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.571 [2024-11-04 12:33:22.042213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.571 [2024-11-04 12:33:22.042447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.571 [2024-11-04 12:33:22.042666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.571 [2024-11-04 12:33:22.042675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.571 [2024-11-04 12:33:22.042682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.571 [2024-11-04 12:33:22.046173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.571 [2024-11-04 12:33:22.055414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.571 [2024-11-04 12:33:22.056076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.571 [2024-11-04 12:33:22.056113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.571 [2024-11-04 12:33:22.056124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.571 [2024-11-04 12:33:22.056358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.571 [2024-11-04 12:33:22.056577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.571 [2024-11-04 12:33:22.056586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.571 [2024-11-04 12:33:22.056594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.571 [2024-11-04 12:33:22.060086] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.571 [2024-11-04 12:33:22.069325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.571 [2024-11-04 12:33:22.069998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.571 [2024-11-04 12:33:22.070035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.571 [2024-11-04 12:33:22.070047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.571 [2024-11-04 12:33:22.070281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.571 [2024-11-04 12:33:22.070499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.571 [2024-11-04 12:33:22.070508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.571 [2024-11-04 12:33:22.070515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.571 [2024-11-04 12:33:22.074007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.571 [2024-11-04 12:33:22.083252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.571 [2024-11-04 12:33:22.083922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.571 [2024-11-04 12:33:22.083960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.571 [2024-11-04 12:33:22.083971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.571 [2024-11-04 12:33:22.084205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.571 [2024-11-04 12:33:22.084424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.571 [2024-11-04 12:33:22.084432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.571 [2024-11-04 12:33:22.084440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.571 [2024-11-04 12:33:22.087932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.571 [2024-11-04 12:33:22.097180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.571 [2024-11-04 12:33:22.097599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.571 [2024-11-04 12:33:22.097619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.571 [2024-11-04 12:33:22.097627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.571 [2024-11-04 12:33:22.097850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.571 [2024-11-04 12:33:22.098066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.571 [2024-11-04 12:33:22.098074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.571 [2024-11-04 12:33:22.098081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.571 [2024-11-04 12:33:22.101639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.571 [2024-11-04 12:33:22.111098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.571 [2024-11-04 12:33:22.111654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.571 [2024-11-04 12:33:22.111671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.571 [2024-11-04 12:33:22.111679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.571 [2024-11-04 12:33:22.111905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.571 [2024-11-04 12:33:22.112121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.571 [2024-11-04 12:33:22.112129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.571 [2024-11-04 12:33:22.112136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.571 [2024-11-04 12:33:22.115617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.571 [2024-11-04 12:33:22.124858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.571 [2024-11-04 12:33:22.125423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.571 [2024-11-04 12:33:22.125438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.571 [2024-11-04 12:33:22.125446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.571 [2024-11-04 12:33:22.125661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.571 [2024-11-04 12:33:22.125880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.571 [2024-11-04 12:33:22.125889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.571 [2024-11-04 12:33:22.125896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.571 [2024-11-04 12:33:22.129373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.833 [2024-11-04 12:33:22.138621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.833 [2024-11-04 12:33:22.139186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.833 [2024-11-04 12:33:22.139202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.833 [2024-11-04 12:33:22.139209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.833 [2024-11-04 12:33:22.139424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.833 [2024-11-04 12:33:22.139639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.833 [2024-11-04 12:33:22.139646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.833 [2024-11-04 12:33:22.139654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.833 [2024-11-04 12:33:22.143147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.833 [2024-11-04 12:33:22.152399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.833 [2024-11-04 12:33:22.153080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.833 [2024-11-04 12:33:22.153118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.833 [2024-11-04 12:33:22.153129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.833 [2024-11-04 12:33:22.153364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.833 [2024-11-04 12:33:22.153582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.833 [2024-11-04 12:33:22.153590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.833 [2024-11-04 12:33:22.153603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.833 [2024-11-04 12:33:22.157097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.833 [2024-11-04 12:33:22.166190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.833 [2024-11-04 12:33:22.166762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.833 [2024-11-04 12:33:22.166799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.833 [2024-11-04 12:33:22.166811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.833 [2024-11-04 12:33:22.167046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.833 [2024-11-04 12:33:22.167265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.833 [2024-11-04 12:33:22.167274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.833 [2024-11-04 12:33:22.167281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.833 [2024-11-04 12:33:22.170772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.834 [2024-11-04 12:33:22.180018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.834 [2024-11-04 12:33:22.180591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.834 [2024-11-04 12:33:22.180609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.834 [2024-11-04 12:33:22.180618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.834 [2024-11-04 12:33:22.180840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.834 [2024-11-04 12:33:22.181056] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.834 [2024-11-04 12:33:22.181065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.834 [2024-11-04 12:33:22.181072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.834 [2024-11-04 12:33:22.184552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.834 [2024-11-04 12:33:22.193790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.834 [2024-11-04 12:33:22.194374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.834 [2024-11-04 12:33:22.194390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.834 [2024-11-04 12:33:22.194397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.834 [2024-11-04 12:33:22.194612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.834 [2024-11-04 12:33:22.194832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.834 [2024-11-04 12:33:22.194840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.834 [2024-11-04 12:33:22.194847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.834 [2024-11-04 12:33:22.198331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.834 [2024-11-04 12:33:22.207567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.834 [2024-11-04 12:33:22.208124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.834 [2024-11-04 12:33:22.208140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.834 [2024-11-04 12:33:22.208147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.834 [2024-11-04 12:33:22.208362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.834 [2024-11-04 12:33:22.208577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.834 [2024-11-04 12:33:22.208584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.834 [2024-11-04 12:33:22.208591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.834 [2024-11-04 12:33:22.212071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.834 [2024-11-04 12:33:22.221304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.834 [2024-11-04 12:33:22.221869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.834 [2024-11-04 12:33:22.221885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.834 [2024-11-04 12:33:22.221893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.834 [2024-11-04 12:33:22.222107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.834 [2024-11-04 12:33:22.222322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.834 [2024-11-04 12:33:22.222330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.834 [2024-11-04 12:33:22.222336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.834 [2024-11-04 12:33:22.225814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.834 [2024-11-04 12:33:22.235052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.834 [2024-11-04 12:33:22.235611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.834 [2024-11-04 12:33:22.235626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.834 [2024-11-04 12:33:22.235633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.834 [2024-11-04 12:33:22.235853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.834 [2024-11-04 12:33:22.236069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.834 [2024-11-04 12:33:22.236076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.834 [2024-11-04 12:33:22.236083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.834 [2024-11-04 12:33:22.239558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.834 [2024-11-04 12:33:22.248809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.834 [2024-11-04 12:33:22.249460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.834 [2024-11-04 12:33:22.249496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.834 [2024-11-04 12:33:22.249507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.834 [2024-11-04 12:33:22.249742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.834 [2024-11-04 12:33:22.249976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.834 [2024-11-04 12:33:22.249985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.834 [2024-11-04 12:33:22.249993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.834 [2024-11-04 12:33:22.253474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.834 [2024-11-04 12:33:22.262710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.834 [2024-11-04 12:33:22.263386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.834 [2024-11-04 12:33:22.263424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.834 [2024-11-04 12:33:22.263435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.834 [2024-11-04 12:33:22.263670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.834 [2024-11-04 12:33:22.263898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.834 [2024-11-04 12:33:22.263908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.834 [2024-11-04 12:33:22.263915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.834 [2024-11-04 12:33:22.267396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.834 [2024-11-04 12:33:22.276440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.834 [2024-11-04 12:33:22.277070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.834 [2024-11-04 12:33:22.277108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.834 [2024-11-04 12:33:22.277121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.834 [2024-11-04 12:33:22.277357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.834 [2024-11-04 12:33:22.277577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.834 [2024-11-04 12:33:22.277586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.834 [2024-11-04 12:33:22.277593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.834 [2024-11-04 12:33:22.281084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.834 [2024-11-04 12:33:22.290329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.834 [2024-11-04 12:33:22.291052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.834 [2024-11-04 12:33:22.291089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.834 [2024-11-04 12:33:22.291100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.834 [2024-11-04 12:33:22.291335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.834 [2024-11-04 12:33:22.291554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.834 [2024-11-04 12:33:22.291563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.834 [2024-11-04 12:33:22.291570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.834 [2024-11-04 12:33:22.295064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.834 [2024-11-04 12:33:22.304112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.834 [2024-11-04 12:33:22.304650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.834 [2024-11-04 12:33:22.304669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.834 [2024-11-04 12:33:22.304677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.834 [2024-11-04 12:33:22.304897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.834 [2024-11-04 12:33:22.305113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.835 [2024-11-04 12:33:22.305121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.835 [2024-11-04 12:33:22.305128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.835 [2024-11-04 12:33:22.308606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.835 [2024-11-04 12:33:22.317846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.835 [2024-11-04 12:33:22.318408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.835 [2024-11-04 12:33:22.318424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.835 [2024-11-04 12:33:22.318432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.835 [2024-11-04 12:33:22.318647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.835 [2024-11-04 12:33:22.318868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.835 [2024-11-04 12:33:22.318877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.835 [2024-11-04 12:33:22.318884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.835 [2024-11-04 12:33:22.322362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.835 [2024-11-04 12:33:22.331609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.835 [2024-11-04 12:33:22.332177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.835 [2024-11-04 12:33:22.332193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.835 [2024-11-04 12:33:22.332201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.835 [2024-11-04 12:33:22.332418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.835 [2024-11-04 12:33:22.332632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.835 [2024-11-04 12:33:22.332639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.835 [2024-11-04 12:33:22.332646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.835 [2024-11-04 12:33:22.336132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.835 [2024-11-04 12:33:22.345391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.835 [2024-11-04 12:33:22.345931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.835 [2024-11-04 12:33:22.345948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.835 [2024-11-04 12:33:22.345960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.835 [2024-11-04 12:33:22.346175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.835 [2024-11-04 12:33:22.346390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.835 [2024-11-04 12:33:22.346399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.835 [2024-11-04 12:33:22.346407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.835 [2024-11-04 12:33:22.349893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.835 [2024-11-04 12:33:22.359144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.835 [2024-11-04 12:33:22.359703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.835 [2024-11-04 12:33:22.359719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.835 [2024-11-04 12:33:22.359726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.835 [2024-11-04 12:33:22.359946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.835 [2024-11-04 12:33:22.360162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.835 [2024-11-04 12:33:22.360170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.835 [2024-11-04 12:33:22.360177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.835 [2024-11-04 12:33:22.363662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.835 [2024-11-04 12:33:22.372939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.835 [2024-11-04 12:33:22.373446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.835 [2024-11-04 12:33:22.373461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.835 [2024-11-04 12:33:22.373469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.835 [2024-11-04 12:33:22.373684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.835 [2024-11-04 12:33:22.373985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.835 [2024-11-04 12:33:22.373994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.835 [2024-11-04 12:33:22.374001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.835 [2024-11-04 12:33:22.377486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.835 [2024-11-04 12:33:22.386744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:47.835 [2024-11-04 12:33:22.387065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.835 [2024-11-04 12:33:22.387083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:47.835 [2024-11-04 12:33:22.387091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:47.835 [2024-11-04 12:33:22.387306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:47.835 [2024-11-04 12:33:22.387526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:47.835 [2024-11-04 12:33:22.387534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:47.835 [2024-11-04 12:33:22.387541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:47.835 [2024-11-04 12:33:22.391037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:47.835 [2024-11-04 12:33:22.400504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.097 [2024-11-04 12:33:22.401066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.097 [2024-11-04 12:33:22.401084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.097 [2024-11-04 12:33:22.401093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.097 [2024-11-04 12:33:22.401308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.097 [2024-11-04 12:33:22.401523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.097 [2024-11-04 12:33:22.401531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.097 [2024-11-04 12:33:22.401539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.097 [2024-11-04 12:33:22.405023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.097 [2024-11-04 12:33:22.414278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.097 [2024-11-04 12:33:22.414786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.097 [2024-11-04 12:33:22.414802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.097 [2024-11-04 12:33:22.414810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.097 [2024-11-04 12:33:22.415024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.097 [2024-11-04 12:33:22.415239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.097 [2024-11-04 12:33:22.415247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.097 [2024-11-04 12:33:22.415254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.097 [2024-11-04 12:33:22.418733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.097 [2024-11-04 12:33:22.428188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.097 [2024-11-04 12:33:22.428751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.097 [2024-11-04 12:33:22.428767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.097 [2024-11-04 12:33:22.428775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.097 [2024-11-04 12:33:22.428990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.098 [2024-11-04 12:33:22.429205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.098 [2024-11-04 12:33:22.429212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.098 [2024-11-04 12:33:22.429219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.098 [2024-11-04 12:33:22.432701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.098 [2024-11-04 12:33:22.441961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.098 [2024-11-04 12:33:22.442480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.098 [2024-11-04 12:33:22.442497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.098 [2024-11-04 12:33:22.442504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.098 [2024-11-04 12:33:22.442719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.098 [2024-11-04 12:33:22.442941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.098 [2024-11-04 12:33:22.442949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.098 [2024-11-04 12:33:22.442956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.098 [2024-11-04 12:33:22.446437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.098 [2024-11-04 12:33:22.455692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.098 [2024-11-04 12:33:22.456218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.098 [2024-11-04 12:33:22.456233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.098 [2024-11-04 12:33:22.456241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.098 [2024-11-04 12:33:22.456455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.098 [2024-11-04 12:33:22.456670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.098 [2024-11-04 12:33:22.456678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.098 [2024-11-04 12:33:22.456685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.098 [2024-11-04 12:33:22.460171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.098 [2024-11-04 12:33:22.469419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.098 [2024-11-04 12:33:22.469959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.098 [2024-11-04 12:33:22.469975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.098 [2024-11-04 12:33:22.469982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.098 [2024-11-04 12:33:22.470197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.098 [2024-11-04 12:33:22.470412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.098 [2024-11-04 12:33:22.470419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.098 [2024-11-04 12:33:22.470426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.098 [2024-11-04 12:33:22.473910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.098 [2024-11-04 12:33:22.483352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.098 [2024-11-04 12:33:22.483805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.098 [2024-11-04 12:33:22.483822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.098 [2024-11-04 12:33:22.483834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.098 [2024-11-04 12:33:22.484050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.098 [2024-11-04 12:33:22.484265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.098 [2024-11-04 12:33:22.484272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.098 [2024-11-04 12:33:22.484279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.098 [2024-11-04 12:33:22.487766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.098 [2024-11-04 12:33:22.497230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.098 [2024-11-04 12:33:22.497744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.098 [2024-11-04 12:33:22.497765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.098 [2024-11-04 12:33:22.497773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.098 [2024-11-04 12:33:22.497989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.098 [2024-11-04 12:33:22.498204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.098 [2024-11-04 12:33:22.498212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.098 [2024-11-04 12:33:22.498219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.098 [2024-11-04 12:33:22.501699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.098 [2024-11-04 12:33:22.510957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.098 [2024-11-04 12:33:22.511477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.098 [2024-11-04 12:33:22.511492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.098 [2024-11-04 12:33:22.511499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.098 [2024-11-04 12:33:22.511714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.098 [2024-11-04 12:33:22.511934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.098 [2024-11-04 12:33:22.511943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.098 [2024-11-04 12:33:22.511950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.098 [2024-11-04 12:33:22.515432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.098 [2024-11-04 12:33:22.524682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.098 [2024-11-04 12:33:22.525247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.098 [2024-11-04 12:33:22.525262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.098 [2024-11-04 12:33:22.525270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.098 [2024-11-04 12:33:22.525484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.098 [2024-11-04 12:33:22.525698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.098 [2024-11-04 12:33:22.525710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.098 [2024-11-04 12:33:22.525717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.098 [2024-11-04 12:33:22.529203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.098 [2024-11-04 12:33:22.538456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.098 [2024-11-04 12:33:22.538982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.098 [2024-11-04 12:33:22.538999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.098 [2024-11-04 12:33:22.539006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.098 [2024-11-04 12:33:22.539221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.098 [2024-11-04 12:33:22.539436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.098 [2024-11-04 12:33:22.539444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.098 [2024-11-04 12:33:22.539451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.098 [2024-11-04 12:33:22.542943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.098 [2024-11-04 12:33:22.552201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.098 [2024-11-04 12:33:22.552702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.098 [2024-11-04 12:33:22.552718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.098 [2024-11-04 12:33:22.552726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.098 [2024-11-04 12:33:22.552947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.098 [2024-11-04 12:33:22.553162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.098 [2024-11-04 12:33:22.553171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.098 [2024-11-04 12:33:22.553178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.098 [2024-11-04 12:33:22.556656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.098 [2024-11-04 12:33:22.566112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.098 [2024-11-04 12:33:22.566669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.098 [2024-11-04 12:33:22.566684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.098 [2024-11-04 12:33:22.566692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.098 [2024-11-04 12:33:22.566912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.098 [2024-11-04 12:33:22.567128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.098 [2024-11-04 12:33:22.567135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.099 [2024-11-04 12:33:22.567142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.099 [2024-11-04 12:33:22.570620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.099 [2024-11-04 12:33:22.579876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.099 [2024-11-04 12:33:22.580453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.099 [2024-11-04 12:33:22.580469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.099 [2024-11-04 12:33:22.580476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.099 [2024-11-04 12:33:22.580691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.099 [2024-11-04 12:33:22.580912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.099 [2024-11-04 12:33:22.580920] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.099 [2024-11-04 12:33:22.580927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.099 [2024-11-04 12:33:22.584410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.099 [2024-11-04 12:33:22.593660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.099 [2024-11-04 12:33:22.594217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.099 [2024-11-04 12:33:22.594232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.099 [2024-11-04 12:33:22.594240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.099 [2024-11-04 12:33:22.594454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.099 [2024-11-04 12:33:22.594669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.099 [2024-11-04 12:33:22.594676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.099 [2024-11-04 12:33:22.594683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.099 [2024-11-04 12:33:22.598179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.099 [2024-11-04 12:33:22.607435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.099 [2024-11-04 12:33:22.607960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.099 [2024-11-04 12:33:22.607976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.099 [2024-11-04 12:33:22.607983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.099 [2024-11-04 12:33:22.608198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.099 [2024-11-04 12:33:22.608413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.099 [2024-11-04 12:33:22.608421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.099 [2024-11-04 12:33:22.608428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.099 [2024-11-04 12:33:22.611911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.099 [2024-11-04 12:33:22.621166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.099 [2024-11-04 12:33:22.621682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.099 [2024-11-04 12:33:22.621698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.099 [2024-11-04 12:33:22.621705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.099 [2024-11-04 12:33:22.621931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.099 [2024-11-04 12:33:22.622147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.099 [2024-11-04 12:33:22.622155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.099 [2024-11-04 12:33:22.622162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.099 [2024-11-04 12:33:22.625644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.099 [2024-11-04 12:33:22.634898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.099 [2024-11-04 12:33:22.635457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.099 [2024-11-04 12:33:22.635473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.099 [2024-11-04 12:33:22.635480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.099 [2024-11-04 12:33:22.635695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.099 [2024-11-04 12:33:22.635916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.099 [2024-11-04 12:33:22.635925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.099 [2024-11-04 12:33:22.635932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.099 [2024-11-04 12:33:22.639413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.099 [2024-11-04 12:33:22.648671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.099 [2024-11-04 12:33:22.649200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.099 [2024-11-04 12:33:22.649216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.099 [2024-11-04 12:33:22.649223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.099 [2024-11-04 12:33:22.649437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.099 [2024-11-04 12:33:22.649652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.099 [2024-11-04 12:33:22.649660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.099 [2024-11-04 12:33:22.649667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.099 [2024-11-04 12:33:22.653150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.099 [2024-11-04 12:33:22.662402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.099 [2024-11-04 12:33:22.662934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.099 [2024-11-04 12:33:22.662950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.099 [2024-11-04 12:33:22.662957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.099 [2024-11-04 12:33:22.663172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.099 [2024-11-04 12:33:22.663386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.099 [2024-11-04 12:33:22.663394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.099 [2024-11-04 12:33:22.663404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.362 [2024-11-04 12:33:22.666890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.362 [2024-11-04 12:33:22.676144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.362 [2024-11-04 12:33:22.676658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.362 [2024-11-04 12:33:22.676673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.362 [2024-11-04 12:33:22.676681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.362 [2024-11-04 12:33:22.676900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.362 [2024-11-04 12:33:22.677116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.362 [2024-11-04 12:33:22.677123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.362 [2024-11-04 12:33:22.677130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.362 [2024-11-04 12:33:22.680611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.362 7500.75 IOPS, 29.30 MiB/s [2024-11-04T11:33:22.932Z] [2024-11-04 12:33:22.690283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.362 [2024-11-04 12:33:22.690838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.362 [2024-11-04 12:33:22.690854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.362 [2024-11-04 12:33:22.690862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.362 [2024-11-04 12:33:22.691077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.362 [2024-11-04 12:33:22.691292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.362 [2024-11-04 12:33:22.691299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.362 [2024-11-04 12:33:22.691306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.362 [2024-11-04 12:33:22.694804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.362 [2024-11-04 12:33:22.704056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.362 [2024-11-04 12:33:22.704566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.362 [2024-11-04 12:33:22.704582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.362 [2024-11-04 12:33:22.704589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.362 [2024-11-04 12:33:22.704809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.362 [2024-11-04 12:33:22.705025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.362 [2024-11-04 12:33:22.705033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.362 [2024-11-04 12:33:22.705040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.362 [2024-11-04 12:33:22.708519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
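The interleaved sample "7500.75 IOPS, 29.30 MiB/s" is the test workload's periodic throughput report, and its two figures are mutually consistent with 4 KiB I/Os: 29.30 MiB/s ÷ 7500.75 IOPS ≈ 4096 bytes per operation (7500.75 × 4096 B = 30,723,072 B/s ≈ 29.30 MiB/s). The workload is still completing I/O on this sample even though every reconnect to 10.0.0.2:4420 in the surrounding records is being refused; whether those completions come from another path or from requests queued before the failure cannot be determined from this excerpt alone.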
00:28:48.362 [2024-11-04 12:33:22.717985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.362 [2024-11-04 12:33:22.718502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.362 [2024-11-04 12:33:22.718518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.362 [2024-11-04 12:33:22.718525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.362 [2024-11-04 12:33:22.718740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.362 [2024-11-04 12:33:22.718961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.362 [2024-11-04 12:33:22.718970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.362 [2024-11-04 12:33:22.718976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.362 [2024-11-04 12:33:22.722458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.362 [2024-11-04 12:33:22.731914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.362 [2024-11-04 12:33:22.732431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.362 [2024-11-04 12:33:22.732446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.362 [2024-11-04 12:33:22.732453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.362 [2024-11-04 12:33:22.732668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.362 [2024-11-04 12:33:22.732887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.362 [2024-11-04 12:33:22.732895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.362 [2024-11-04 12:33:22.732902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.362 [2024-11-04 12:33:22.736381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.362 [2024-11-04 12:33:22.745846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.362 [2024-11-04 12:33:22.746360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.362 [2024-11-04 12:33:22.746375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.362 [2024-11-04 12:33:22.746383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.362 [2024-11-04 12:33:22.746598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.362 [2024-11-04 12:33:22.746818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.362 [2024-11-04 12:33:22.746826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.362 [2024-11-04 12:33:22.746833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.362 [2024-11-04 12:33:22.750315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.362 [2024-11-04 12:33:22.759574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.362 [2024-11-04 12:33:22.760097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.362 [2024-11-04 12:33:22.760113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.362 [2024-11-04 12:33:22.760120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.362 [2024-11-04 12:33:22.760335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.362 [2024-11-04 12:33:22.760553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.362 [2024-11-04 12:33:22.760561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.362 [2024-11-04 12:33:22.760568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.362 [2024-11-04 12:33:22.764053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.362 [2024-11-04 12:33:22.773305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.362 [2024-11-04 12:33:22.773854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.362 [2024-11-04 12:33:22.773871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.362 [2024-11-04 12:33:22.773878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.362 [2024-11-04 12:33:22.774092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.362 [2024-11-04 12:33:22.774307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.362 [2024-11-04 12:33:22.774315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.362 [2024-11-04 12:33:22.774321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.362 [2024-11-04 12:33:22.777806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.362 [2024-11-04 12:33:22.787053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.362 [2024-11-04 12:33:22.787608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.362 [2024-11-04 12:33:22.787623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.363 [2024-11-04 12:33:22.787630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.363 [2024-11-04 12:33:22.787850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.363 [2024-11-04 12:33:22.788066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.363 [2024-11-04 12:33:22.788073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.363 [2024-11-04 12:33:22.788081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.363 [2024-11-04 12:33:22.791559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.363 [2024-11-04 12:33:22.800820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.363 [2024-11-04 12:33:22.801381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.363 [2024-11-04 12:33:22.801395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.363 [2024-11-04 12:33:22.801403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.363 [2024-11-04 12:33:22.801617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.363 [2024-11-04 12:33:22.801838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.363 [2024-11-04 12:33:22.801847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.363 [2024-11-04 12:33:22.801854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.363 [2024-11-04 12:33:22.805340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.363 [2024-11-04 12:33:22.814584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.363 [2024-11-04 12:33:22.815106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.363 [2024-11-04 12:33:22.815122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.363 [2024-11-04 12:33:22.815129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.363 [2024-11-04 12:33:22.815344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.363 [2024-11-04 12:33:22.815559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.363 [2024-11-04 12:33:22.815566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.363 [2024-11-04 12:33:22.815573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.363 [2024-11-04 12:33:22.819059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.363 [2024-11-04 12:33:22.828309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.363 [2024-11-04 12:33:22.828737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.363 [2024-11-04 12:33:22.828756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.363 [2024-11-04 12:33:22.828764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.363 [2024-11-04 12:33:22.828979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.363 [2024-11-04 12:33:22.829194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.363 [2024-11-04 12:33:22.829202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.363 [2024-11-04 12:33:22.829209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.363 [2024-11-04 12:33:22.832685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.363 [2024-11-04 12:33:22.842148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.363 [2024-11-04 12:33:22.842703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.363 [2024-11-04 12:33:22.842718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.363 [2024-11-04 12:33:22.842726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.363 [2024-11-04 12:33:22.842945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.363 [2024-11-04 12:33:22.843161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.363 [2024-11-04 12:33:22.843169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.363 [2024-11-04 12:33:22.843175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.363 [2024-11-04 12:33:22.846662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.363 [2024-11-04 12:33:22.855915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.363 [2024-11-04 12:33:22.856432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.363 [2024-11-04 12:33:22.856448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.363 [2024-11-04 12:33:22.856459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.363 [2024-11-04 12:33:22.856673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.363 [2024-11-04 12:33:22.856894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.363 [2024-11-04 12:33:22.856903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.363 [2024-11-04 12:33:22.856910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.363 [2024-11-04 12:33:22.860393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.363 [2024-11-04 12:33:22.869643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.363 [2024-11-04 12:33:22.870178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.363 [2024-11-04 12:33:22.870194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.363 [2024-11-04 12:33:22.870202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.363 [2024-11-04 12:33:22.870416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.363 [2024-11-04 12:33:22.870632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.363 [2024-11-04 12:33:22.870639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.363 [2024-11-04 12:33:22.870646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.363 [2024-11-04 12:33:22.874136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.363 [2024-11-04 12:33:22.883393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.363 [2024-11-04 12:33:22.883920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.363 [2024-11-04 12:33:22.883935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.363 [2024-11-04 12:33:22.883943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.363 [2024-11-04 12:33:22.884158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.363 [2024-11-04 12:33:22.884373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.363 [2024-11-04 12:33:22.884380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.363 [2024-11-04 12:33:22.884388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.363 [2024-11-04 12:33:22.887874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.363 [2024-11-04 12:33:22.897140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.363 [2024-11-04 12:33:22.897670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.363 [2024-11-04 12:33:22.897685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.363 [2024-11-04 12:33:22.897693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.363 [2024-11-04 12:33:22.897914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.363 [2024-11-04 12:33:22.898133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.363 [2024-11-04 12:33:22.898142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.363 [2024-11-04 12:33:22.898149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.363 [2024-11-04 12:33:22.901635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.363 [2024-11-04 12:33:22.910895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.363 [2024-11-04 12:33:22.911423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.363 [2024-11-04 12:33:22.911439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.363 [2024-11-04 12:33:22.911446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.363 [2024-11-04 12:33:22.911661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.363 [2024-11-04 12:33:22.911882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.363 [2024-11-04 12:33:22.911890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.363 [2024-11-04 12:33:22.911897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.363 [2024-11-04 12:33:22.915380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.363 [2024-11-04 12:33:22.924635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.363 [2024-11-04 12:33:22.925066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.363 [2024-11-04 12:33:22.925081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.363 [2024-11-04 12:33:22.925089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.363 [2024-11-04 12:33:22.925303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.363 [2024-11-04 12:33:22.925518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.364 [2024-11-04 12:33:22.925525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.364 [2024-11-04 12:33:22.925533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.364 [2024-11-04 12:33:22.929020] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.631 [2024-11-04 12:33:22.938508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.631 [2024-11-04 12:33:22.939008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.631 [2024-11-04 12:33:22.939023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.631 [2024-11-04 12:33:22.939031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.631 [2024-11-04 12:33:22.939246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.631 [2024-11-04 12:33:22.939461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.631 [2024-11-04 12:33:22.939468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.631 [2024-11-04 12:33:22.939475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.631 [2024-11-04 12:33:22.942962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.631 [2024-11-04 12:33:22.952233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.631 [2024-11-04 12:33:22.952793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.631 [2024-11-04 12:33:22.952809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.631 [2024-11-04 12:33:22.952817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.631 [2024-11-04 12:33:22.953032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.631 [2024-11-04 12:33:22.953247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.631 [2024-11-04 12:33:22.953254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.631 [2024-11-04 12:33:22.953261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.631 [2024-11-04 12:33:22.956744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.631 [2024-11-04 12:33:22.966004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.631 [2024-11-04 12:33:22.966516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.631 [2024-11-04 12:33:22.966532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.631 [2024-11-04 12:33:22.966539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.631 [2024-11-04 12:33:22.966761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.631 [2024-11-04 12:33:22.966978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.631 [2024-11-04 12:33:22.966986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.631 [2024-11-04 12:33:22.966993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.631 [2024-11-04 12:33:22.970474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.632 [2024-11-04 12:33:22.979726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.632 [2024-11-04 12:33:22.980291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.632 [2024-11-04 12:33:22.980307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.632 [2024-11-04 12:33:22.980315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.632 [2024-11-04 12:33:22.980530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.632 [2024-11-04 12:33:22.980750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.632 [2024-11-04 12:33:22.980759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.632 [2024-11-04 12:33:22.980766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.632 [2024-11-04 12:33:22.984248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.632 [2024-11-04 12:33:22.993586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.632 [2024-11-04 12:33:22.994162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.632 [2024-11-04 12:33:22.994185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.632 [2024-11-04 12:33:22.994197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.632 [2024-11-04 12:33:22.994416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.632 [2024-11-04 12:33:22.994635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.632 [2024-11-04 12:33:22.994643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.632 [2024-11-04 12:33:22.994651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.632 [2024-11-04 12:33:22.998197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.632 [2024-11-04 12:33:23.007397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.632 [2024-11-04 12:33:23.007962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.632 [2024-11-04 12:33:23.007987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.632 [2024-11-04 12:33:23.007998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.632 [2024-11-04 12:33:23.008221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.632 [2024-11-04 12:33:23.008441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.632 [2024-11-04 12:33:23.008449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.632 [2024-11-04 12:33:23.008456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.632 [2024-11-04 12:33:23.011991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.632 [2024-11-04 12:33:23.021350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.632 [2024-11-04 12:33:23.021894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.632 [2024-11-04 12:33:23.021912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.632 [2024-11-04 12:33:23.021920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.632 [2024-11-04 12:33:23.022139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.632 [2024-11-04 12:33:23.022357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.632 [2024-11-04 12:33:23.022366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.632 [2024-11-04 12:33:23.022378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.632 [2024-11-04 12:33:23.025902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.632 [2024-11-04 12:33:23.035298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.632 [2024-11-04 12:33:23.035848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.632 [2024-11-04 12:33:23.035867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.632 [2024-11-04 12:33:23.035874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.632 [2024-11-04 12:33:23.036093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.632 [2024-11-04 12:33:23.036312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.632 [2024-11-04 12:33:23.036325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.632 [2024-11-04 12:33:23.036333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.632 [2024-11-04 12:33:23.039846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.632 [2024-11-04 12:33:23.049103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.632 [2024-11-04 12:33:23.049653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.632 [2024-11-04 12:33:23.049669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.632 [2024-11-04 12:33:23.049677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.632 [2024-11-04 12:33:23.049896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.632 [2024-11-04 12:33:23.050112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.632 [2024-11-04 12:33:23.050119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.632 [2024-11-04 12:33:23.050126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.632 [2024-11-04 12:33:23.053603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.632 [2024-11-04 12:33:23.062879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.632 [2024-11-04 12:33:23.063391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.632 [2024-11-04 12:33:23.063429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.632 [2024-11-04 12:33:23.063440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.632 [2024-11-04 12:33:23.063674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.632 [2024-11-04 12:33:23.063900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.632 [2024-11-04 12:33:23.063910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.632 [2024-11-04 12:33:23.063918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.632 [2024-11-04 12:33:23.067400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.632 [2024-11-04 12:33:23.076646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.632 [2024-11-04 12:33:23.077305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.632 [2024-11-04 12:33:23.077342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.632 [2024-11-04 12:33:23.077353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.632 [2024-11-04 12:33:23.077588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.632 [2024-11-04 12:33:23.077815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.632 [2024-11-04 12:33:23.077824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.632 [2024-11-04 12:33:23.077831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.632 [2024-11-04 12:33:23.081321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.632 [2024-11-04 12:33:23.090568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.632 [2024-11-04 12:33:23.091254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.632 [2024-11-04 12:33:23.091292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.632 [2024-11-04 12:33:23.091303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.632 [2024-11-04 12:33:23.091538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.632 [2024-11-04 12:33:23.091766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.632 [2024-11-04 12:33:23.091775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.632 [2024-11-04 12:33:23.091783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.632 [2024-11-04 12:33:23.095275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.632 [2024-11-04 12:33:23.104314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.632 [2024-11-04 12:33:23.104985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.632 [2024-11-04 12:33:23.105022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.632 [2024-11-04 12:33:23.105034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.632 [2024-11-04 12:33:23.105272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.632 [2024-11-04 12:33:23.105490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.632 [2024-11-04 12:33:23.105499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.632 [2024-11-04 12:33:23.105507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.632 [2024-11-04 12:33:23.108999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.632 [2024-11-04 12:33:23.118043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.632 [2024-11-04 12:33:23.118660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.632 [2024-11-04 12:33:23.118697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.633 [2024-11-04 12:33:23.118709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.633 [2024-11-04 12:33:23.118957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.633 [2024-11-04 12:33:23.119177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.633 [2024-11-04 12:33:23.119185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.633 [2024-11-04 12:33:23.119193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.633 [2024-11-04 12:33:23.122674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.633 [2024-11-04 12:33:23.131923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.633 [2024-11-04 12:33:23.132608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.633 [2024-11-04 12:33:23.132645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.633 [2024-11-04 12:33:23.132656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.633 [2024-11-04 12:33:23.132906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.633 [2024-11-04 12:33:23.133126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.633 [2024-11-04 12:33:23.133134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.633 [2024-11-04 12:33:23.133142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.633 [2024-11-04 12:33:23.136624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.633 [2024-11-04 12:33:23.145677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.633 [2024-11-04 12:33:23.146351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.633 [2024-11-04 12:33:23.146389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.633 [2024-11-04 12:33:23.146400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.633 [2024-11-04 12:33:23.146635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.633 [2024-11-04 12:33:23.146862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.633 [2024-11-04 12:33:23.146871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.633 [2024-11-04 12:33:23.146879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.633 [2024-11-04 12:33:23.150365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.633 [2024-11-04 12:33:23.159401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.633 [2024-11-04 12:33:23.160039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.633 [2024-11-04 12:33:23.160077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.633 [2024-11-04 12:33:23.160088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.633 [2024-11-04 12:33:23.160323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.633 [2024-11-04 12:33:23.160542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.633 [2024-11-04 12:33:23.160550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.633 [2024-11-04 12:33:23.160558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.633 [2024-11-04 12:33:23.164054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.633 [2024-11-04 12:33:23.173302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.633 [2024-11-04 12:33:23.173878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.633 [2024-11-04 12:33:23.173916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.633 [2024-11-04 12:33:23.173928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.633 [2024-11-04 12:33:23.174166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.633 [2024-11-04 12:33:23.174385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.633 [2024-11-04 12:33:23.174393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.633 [2024-11-04 12:33:23.174405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.633 [2024-11-04 12:33:23.177898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.633 [2024-11-04 12:33:23.187139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.633 [2024-11-04 12:33:23.187803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.633 [2024-11-04 12:33:23.187841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.633 [2024-11-04 12:33:23.187853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.633 [2024-11-04 12:33:23.188089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.633 [2024-11-04 12:33:23.188308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.633 [2024-11-04 12:33:23.188317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.633 [2024-11-04 12:33:23.188324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.633 [2024-11-04 12:33:23.191819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.967 [2024-11-04 12:33:23.200875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.967 [2024-11-04 12:33:23.201498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.967 [2024-11-04 12:33:23.201535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:48.967 [2024-11-04 12:33:23.201546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:48.967 [2024-11-04 12:33:23.201790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:48.967 [2024-11-04 12:33:23.202010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.967 [2024-11-04 12:33:23.202018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.967 [2024-11-04 12:33:23.202025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.967 [2024-11-04 12:33:23.205509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.967 [2024-11-04 12:33:23.214765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.967 [2024-11-04 12:33:23.215409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.967 [2024-11-04 12:33:23.215446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.967 [2024-11-04 12:33:23.215458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.967 [2024-11-04 12:33:23.215694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.967 [2024-11-04 12:33:23.215923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.967 [2024-11-04 12:33:23.215932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.967 [2024-11-04 12:33:23.215940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.967 [2024-11-04 12:33:23.219424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.967 [2024-11-04 12:33:23.228669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.967 [2024-11-04 12:33:23.229240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.967 [2024-11-04 12:33:23.229265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.967 [2024-11-04 12:33:23.229273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.967 [2024-11-04 12:33:23.229489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.967 [2024-11-04 12:33:23.229704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.967 [2024-11-04 12:33:23.229712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.967 [2024-11-04 12:33:23.229719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.967 [2024-11-04 12:33:23.233208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.967 [2024-11-04 12:33:23.242448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.967 [2024-11-04 12:33:23.242985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.967 [2024-11-04 12:33:23.243003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.967 [2024-11-04 12:33:23.243010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.967 [2024-11-04 12:33:23.243225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.967 [2024-11-04 12:33:23.243440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.967 [2024-11-04 12:33:23.243448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.967 [2024-11-04 12:33:23.243455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.967 [2024-11-04 12:33:23.246944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.967 [2024-11-04 12:33:23.256183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.967 [2024-11-04 12:33:23.256741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.967 [2024-11-04 12:33:23.256762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.967 [2024-11-04 12:33:23.256770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.967 [2024-11-04 12:33:23.256985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.967 [2024-11-04 12:33:23.257199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.967 [2024-11-04 12:33:23.257207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.967 [2024-11-04 12:33:23.257214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.967 [2024-11-04 12:33:23.260688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.967 [2024-11-04 12:33:23.269934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.967 [2024-11-04 12:33:23.270444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.967 [2024-11-04 12:33:23.270460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.967 [2024-11-04 12:33:23.270467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.967 [2024-11-04 12:33:23.270682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.967 [2024-11-04 12:33:23.270907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.967 [2024-11-04 12:33:23.270916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.967 [2024-11-04 12:33:23.270923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.967 [2024-11-04 12:33:23.274399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.967 [2024-11-04 12:33:23.283845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.967 [2024-11-04 12:33:23.284494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.967 [2024-11-04 12:33:23.284532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.967 [2024-11-04 12:33:23.284543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.967 [2024-11-04 12:33:23.284786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.967 [2024-11-04 12:33:23.285006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.967 [2024-11-04 12:33:23.285014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.967 [2024-11-04 12:33:23.285023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.967 [2024-11-04 12:33:23.288505] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.967 [2024-11-04 12:33:23.297757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.967 [2024-11-04 12:33:23.298416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.967 [2024-11-04 12:33:23.298453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.967 [2024-11-04 12:33:23.298464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.968 [2024-11-04 12:33:23.298698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.968 [2024-11-04 12:33:23.298926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.968 [2024-11-04 12:33:23.298936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.968 [2024-11-04 12:33:23.298943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.968 [2024-11-04 12:33:23.302429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.968 [2024-11-04 12:33:23.311671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.968 [2024-11-04 12:33:23.312288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.968 [2024-11-04 12:33:23.312325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.968 [2024-11-04 12:33:23.312336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.968 [2024-11-04 12:33:23.312571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.968 [2024-11-04 12:33:23.312797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.968 [2024-11-04 12:33:23.312807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.968 [2024-11-04 12:33:23.312815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.968 [2024-11-04 12:33:23.316337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.968 [2024-11-04 12:33:23.325583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.968 [2024-11-04 12:33:23.326250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.968 [2024-11-04 12:33:23.326287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.968 [2024-11-04 12:33:23.326298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.968 [2024-11-04 12:33:23.326533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.968 [2024-11-04 12:33:23.326760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.968 [2024-11-04 12:33:23.326769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.968 [2024-11-04 12:33:23.326777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.968 [2024-11-04 12:33:23.330261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.968 [2024-11-04 12:33:23.339503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.968 [2024-11-04 12:33:23.340167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.968 [2024-11-04 12:33:23.340205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.968 [2024-11-04 12:33:23.340216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.968 [2024-11-04 12:33:23.340451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.968 [2024-11-04 12:33:23.340670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.968 [2024-11-04 12:33:23.340678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.968 [2024-11-04 12:33:23.340686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.968 [2024-11-04 12:33:23.344177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.968 [2024-11-04 12:33:23.353222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.968 [2024-11-04 12:33:23.353758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.968 [2024-11-04 12:33:23.353778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.968 [2024-11-04 12:33:23.353786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.968 [2024-11-04 12:33:23.354003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.968 [2024-11-04 12:33:23.354218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.968 [2024-11-04 12:33:23.354226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.968 [2024-11-04 12:33:23.354233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.968 [2024-11-04 12:33:23.357711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.968 [2024-11-04 12:33:23.366957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.968 [2024-11-04 12:33:23.367523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.968 [2024-11-04 12:33:23.367539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.968 [2024-11-04 12:33:23.367551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.968 [2024-11-04 12:33:23.367771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.968 [2024-11-04 12:33:23.367987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.968 [2024-11-04 12:33:23.367995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.968 [2024-11-04 12:33:23.368002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.968 [2024-11-04 12:33:23.371478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.968 [2024-11-04 12:33:23.380716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.968 [2024-11-04 12:33:23.381238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.968 [2024-11-04 12:33:23.381254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.968 [2024-11-04 12:33:23.381261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.968 [2024-11-04 12:33:23.381476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.968 [2024-11-04 12:33:23.381690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.968 [2024-11-04 12:33:23.381698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.968 [2024-11-04 12:33:23.381705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.968 [2024-11-04 12:33:23.385186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.968 [2024-11-04 12:33:23.394627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.968 [2024-11-04 12:33:23.395183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.968 [2024-11-04 12:33:23.395198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.968 [2024-11-04 12:33:23.395206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.968 [2024-11-04 12:33:23.395421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.968 [2024-11-04 12:33:23.395635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.968 [2024-11-04 12:33:23.395643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.968 [2024-11-04 12:33:23.395651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.968 [2024-11-04 12:33:23.399140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.968 [2024-11-04 12:33:23.408460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.968 [2024-11-04 12:33:23.408975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.968 [2024-11-04 12:33:23.408993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.968 [2024-11-04 12:33:23.409000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.968 [2024-11-04 12:33:23.409215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.968 [2024-11-04 12:33:23.409430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.968 [2024-11-04 12:33:23.409445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.968 [2024-11-04 12:33:23.409452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.968 [2024-11-04 12:33:23.412931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.968 [2024-11-04 12:33:23.422370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.968 [2024-11-04 12:33:23.423042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.968 [2024-11-04 12:33:23.423079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.968 [2024-11-04 12:33:23.423090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.968 [2024-11-04 12:33:23.423324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.968 [2024-11-04 12:33:23.423543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.968 [2024-11-04 12:33:23.423552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.969 [2024-11-04 12:33:23.423559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.969 [2024-11-04 12:33:23.427050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.969 [2024-11-04 12:33:23.436290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.969 [2024-11-04 12:33:23.436892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.969 [2024-11-04 12:33:23.436929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.969 [2024-11-04 12:33:23.436940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.969 [2024-11-04 12:33:23.437175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.969 [2024-11-04 12:33:23.437393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.969 [2024-11-04 12:33:23.437402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.969 [2024-11-04 12:33:23.437410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.969 [2024-11-04 12:33:23.440902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.969 [2024-11-04 12:33:23.450190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.969 [2024-11-04 12:33:23.450869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.969 [2024-11-04 12:33:23.450906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.969 [2024-11-04 12:33:23.450919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.969 [2024-11-04 12:33:23.451155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.969 [2024-11-04 12:33:23.451375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.969 [2024-11-04 12:33:23.451383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.969 [2024-11-04 12:33:23.451390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.969 [2024-11-04 12:33:23.454886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.969 [2024-11-04 12:33:23.463929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.969 [2024-11-04 12:33:23.464596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.969 [2024-11-04 12:33:23.464633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.969 [2024-11-04 12:33:23.464644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.969 [2024-11-04 12:33:23.464887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.969 [2024-11-04 12:33:23.465107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.969 [2024-11-04 12:33:23.465115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.969 [2024-11-04 12:33:23.465123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.969 [2024-11-04 12:33:23.468606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.969 [2024-11-04 12:33:23.477670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.969 [2024-11-04 12:33:23.478337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.969 [2024-11-04 12:33:23.478375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.969 [2024-11-04 12:33:23.478386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.969 [2024-11-04 12:33:23.478620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.969 [2024-11-04 12:33:23.478849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.969 [2024-11-04 12:33:23.478858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.969 [2024-11-04 12:33:23.478866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.969 [2024-11-04 12:33:23.482535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.969 [2024-11-04 12:33:23.491585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.969 [2024-11-04 12:33:23.492220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.969 [2024-11-04 12:33:23.492257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.969 [2024-11-04 12:33:23.492268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.969 [2024-11-04 12:33:23.492503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.969 [2024-11-04 12:33:23.492722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.969 [2024-11-04 12:33:23.492730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.969 [2024-11-04 12:33:23.492737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.969 [2024-11-04 12:33:23.496236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.969 [2024-11-04 12:33:23.505484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.969 [2024-11-04 12:33:23.506176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.969 [2024-11-04 12:33:23.506213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.969 [2024-11-04 12:33:23.506228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.969 [2024-11-04 12:33:23.506463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.969 [2024-11-04 12:33:23.506682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.969 [2024-11-04 12:33:23.506690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.969 [2024-11-04 12:33:23.506698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.969 [2024-11-04 12:33:23.510200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.969 [2024-11-04 12:33:23.519244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:48.969 [2024-11-04 12:33:23.519845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.969 [2024-11-04 12:33:23.519883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:48.969 [2024-11-04 12:33:23.519894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:48.969 [2024-11-04 12:33:23.520129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:48.969 [2024-11-04 12:33:23.520348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:48.969 [2024-11-04 12:33:23.520356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:48.969 [2024-11-04 12:33:23.520364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:48.969 [2024-11-04 12:33:23.523857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:48.969 [2024-11-04 12:33:23.533103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.284 [2024-11-04 12:33:23.533729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.284 [2024-11-04 12:33:23.533774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.284 [2024-11-04 12:33:23.533786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.284 [2024-11-04 12:33:23.534021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.284 [2024-11-04 12:33:23.534240] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.284 [2024-11-04 12:33:23.534248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.284 [2024-11-04 12:33:23.534256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.284 [2024-11-04 12:33:23.537742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.284 [2024-11-04 12:33:23.547005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.284 [2024-11-04 12:33:23.547575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.284 [2024-11-04 12:33:23.547593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.284 [2024-11-04 12:33:23.547602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.284 [2024-11-04 12:33:23.547824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.284 [2024-11-04 12:33:23.548040] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.284 [2024-11-04 12:33:23.548048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.284 [2024-11-04 12:33:23.548060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.284 [2024-11-04 12:33:23.551540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.284 [2024-11-04 12:33:23.560777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.284 [2024-11-04 12:33:23.561297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.284 [2024-11-04 12:33:23.561312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.284 [2024-11-04 12:33:23.561320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.284 [2024-11-04 12:33:23.561535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.284 [2024-11-04 12:33:23.561755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.284 [2024-11-04 12:33:23.561763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.284 [2024-11-04 12:33:23.561770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.284 [2024-11-04 12:33:23.565248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.284 [2024-11-04 12:33:23.574687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.284 [2024-11-04 12:33:23.575224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.284 [2024-11-04 12:33:23.575240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.284 [2024-11-04 12:33:23.575247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.284 [2024-11-04 12:33:23.575462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.284 [2024-11-04 12:33:23.575677] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.284 [2024-11-04 12:33:23.575684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.284 [2024-11-04 12:33:23.575691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.284 [2024-11-04 12:33:23.579172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.284 [2024-11-04 12:33:23.588612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.284 [2024-11-04 12:33:23.589269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.284 [2024-11-04 12:33:23.589307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.284 [2024-11-04 12:33:23.589318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.284 [2024-11-04 12:33:23.589552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.284 [2024-11-04 12:33:23.589779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.284 [2024-11-04 12:33:23.589789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.284 [2024-11-04 12:33:23.589796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.284 [2024-11-04 12:33:23.593280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.284 [2024-11-04 12:33:23.602338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.284 [2024-11-04 12:33:23.602875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.284 [2024-11-04 12:33:23.602913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.284 [2024-11-04 12:33:23.602925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.284 [2024-11-04 12:33:23.603163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.284 [2024-11-04 12:33:23.603381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.284 [2024-11-04 12:33:23.603390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.284 [2024-11-04 12:33:23.603397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.284 [2024-11-04 12:33:23.606889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.284 [2024-11-04 12:33:23.616133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.284 [2024-11-04 12:33:23.616771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.284 [2024-11-04 12:33:23.616809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.284 [2024-11-04 12:33:23.616821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.284 [2024-11-04 12:33:23.617059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.284 [2024-11-04 12:33:23.617278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.284 [2024-11-04 12:33:23.617286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.284 [2024-11-04 12:33:23.617294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.284 [2024-11-04 12:33:23.620783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.284 [2024-11-04 12:33:23.630024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.284 [2024-11-04 12:33:23.630666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.284 [2024-11-04 12:33:23.630703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.284 [2024-11-04 12:33:23.630715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.284 [2024-11-04 12:33:23.630960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.284 [2024-11-04 12:33:23.631181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.284 [2024-11-04 12:33:23.631190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.284 [2024-11-04 12:33:23.631198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.284 [2024-11-04 12:33:23.634685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.284 [2024-11-04 12:33:23.643935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.284 [2024-11-04 12:33:23.644583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.284 [2024-11-04 12:33:23.644620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.285 [2024-11-04 12:33:23.644631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.285 [2024-11-04 12:33:23.644877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.285 [2024-11-04 12:33:23.645097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.285 [2024-11-04 12:33:23.645105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.285 [2024-11-04 12:33:23.645113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.285 [2024-11-04 12:33:23.648608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.285 [2024-11-04 12:33:23.657858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.285 [2024-11-04 12:33:23.658501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.285 [2024-11-04 12:33:23.658538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.285 [2024-11-04 12:33:23.658549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.285 [2024-11-04 12:33:23.658793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.285 [2024-11-04 12:33:23.659013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.285 [2024-11-04 12:33:23.659021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.285 [2024-11-04 12:33:23.659029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.285 [2024-11-04 12:33:23.662512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.285 [2024-11-04 12:33:23.671758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.285 [2024-11-04 12:33:23.672327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.285 [2024-11-04 12:33:23.672346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.285 [2024-11-04 12:33:23.672354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.285 [2024-11-04 12:33:23.672569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.285 [2024-11-04 12:33:23.672790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.285 [2024-11-04 12:33:23.672799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.285 [2024-11-04 12:33:23.672806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.285 [2024-11-04 12:33:23.676284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.285 [2024-11-04 12:33:23.685533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.285 [2024-11-04 12:33:23.686097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.285 [2024-11-04 12:33:23.686113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.285 [2024-11-04 12:33:23.686121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.285 [2024-11-04 12:33:23.686336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.285 [2024-11-04 12:33:23.686550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.285 [2024-11-04 12:33:23.686558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.285 [2024-11-04 12:33:23.686569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.285 6000.60 IOPS, 23.44 MiB/s [2024-11-04T11:33:23.855Z] [2024-11-04 12:33:23.691694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.285 [2024-11-04 12:33:23.699307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.285 [2024-11-04 12:33:23.699968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.285 [2024-11-04 12:33:23.700006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.285 [2024-11-04 12:33:23.700017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.285 [2024-11-04 12:33:23.700252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.285 [2024-11-04 12:33:23.700471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.285 [2024-11-04 12:33:23.700479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.285 [2024-11-04 12:33:23.700488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.285 [2024-11-04 12:33:23.703988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
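The bdevperf sample interleaved above ("6000.60 IOPS, 23.44 MiB/s") is self-consistent if the workload issues 4 KiB I/Os: 6000.60 x 4096 B / 2^20 B/MiB is about 23.44 MiB/s. The I/O size itself is not visible in this excerpt, so the block size in the check below is an assumption:

/* Sanity-check the bdevperf sample. The 4 KiB block size is an assumption
 * (not shown in this log excerpt); the IOPS figure is from the log. */
#include <stdio.h>

int main(void)
{
    double iops = 6000.60;                        /* from the log sample */
    double block_bytes = 4096.0;                  /* assumed I/O size */
    printf("%.2f MiB/s\n", iops * block_bytes / (1024.0 * 1024.0));
    /* Prints 23.44 MiB/s, matching the logged throughput. */
    return 0;
}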
00:28:49.285 [2024-11-04 12:33:23.713029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.285 [2024-11-04 12:33:23.713693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.285 [2024-11-04 12:33:23.713730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.285 [2024-11-04 12:33:23.713743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.285 [2024-11-04 12:33:23.713991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.285 [2024-11-04 12:33:23.714210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.285 [2024-11-04 12:33:23.714219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.285 [2024-11-04 12:33:23.714227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.285 [2024-11-04 12:33:23.717716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.285 [2024-11-04 12:33:23.726760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.285 [2024-11-04 12:33:23.727424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.285 [2024-11-04 12:33:23.727462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.285 [2024-11-04 12:33:23.727473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.285 [2024-11-04 12:33:23.727708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.285 [2024-11-04 12:33:23.727937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.285 [2024-11-04 12:33:23.727947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.285 [2024-11-04 12:33:23.727954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.285 [2024-11-04 12:33:23.731436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.285 [2024-11-04 12:33:23.740684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.285 [2024-11-04 12:33:23.741290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.285 [2024-11-04 12:33:23.741332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.285 [2024-11-04 12:33:23.741343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.285 [2024-11-04 12:33:23.741578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.285 [2024-11-04 12:33:23.741805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.285 [2024-11-04 12:33:23.741815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.285 [2024-11-04 12:33:23.741822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.285 [2024-11-04 12:33:23.745306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.285 [2024-11-04 12:33:23.754558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.285 [2024-11-04 12:33:23.755228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.285 [2024-11-04 12:33:23.755265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.285 [2024-11-04 12:33:23.755276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.285 [2024-11-04 12:33:23.755511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.285 [2024-11-04 12:33:23.755730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.285 [2024-11-04 12:33:23.755738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.285 [2024-11-04 12:33:23.755756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.285 [2024-11-04 12:33:23.759239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.285 [2024-11-04 12:33:23.768280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.285 [2024-11-04 12:33:23.768866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.285 [2024-11-04 12:33:23.768903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.285 [2024-11-04 12:33:23.768915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.285 [2024-11-04 12:33:23.769153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.285 [2024-11-04 12:33:23.769372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.285 [2024-11-04 12:33:23.769380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.285 [2024-11-04 12:33:23.769388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.285 [2024-11-04 12:33:23.772881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.285 [2024-11-04 12:33:23.782124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.285 [2024-11-04 12:33:23.782797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.285 [2024-11-04 12:33:23.782835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.286 [2024-11-04 12:33:23.782848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.286 [2024-11-04 12:33:23.783084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.286 [2024-11-04 12:33:23.783307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.286 [2024-11-04 12:33:23.783316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.286 [2024-11-04 12:33:23.783323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.286 [2024-11-04 12:33:23.786818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.286 [2024-11-04 12:33:23.795866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.286 [2024-11-04 12:33:23.796509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.286 [2024-11-04 12:33:23.796547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.286 [2024-11-04 12:33:23.796558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.286 [2024-11-04 12:33:23.796802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.286 [2024-11-04 12:33:23.797021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.286 [2024-11-04 12:33:23.797030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.286 [2024-11-04 12:33:23.797037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.286 [2024-11-04 12:33:23.800518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.286 [2024-11-04 12:33:23.809762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:49.286 [2024-11-04 12:33:23.810431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.286 [2024-11-04 12:33:23.810469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:49.286 [2024-11-04 12:33:23.810480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:49.286 [2024-11-04 12:33:23.810715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:49.286 [2024-11-04 12:33:23.810944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:49.286 [2024-11-04 12:33:23.810954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:49.286 [2024-11-04 12:33:23.810961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:49.286 [2024-11-04 12:33:23.814445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:49.286 [2024-11-04 12:33:23.823484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.286 [2024-11-04 12:33:23.824024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.286 [2024-11-04 12:33:23.824043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.286 [2024-11-04 12:33:23.824051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.286 [2024-11-04 12:33:23.824268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.286 [2024-11-04 12:33:23.824483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.286 [2024-11-04 12:33:23.824491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.286 [2024-11-04 12:33:23.824498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.286 [2024-11-04 12:33:23.827992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.286 [2024-11-04 12:33:23.837232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.286 [2024-11-04 12:33:23.837797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.286 [2024-11-04 12:33:23.837814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.286 [2024-11-04 12:33:23.837821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.286 [2024-11-04 12:33:23.838036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.286 [2024-11-04 12:33:23.838251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.286 [2024-11-04 12:33:23.838258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.286 [2024-11-04 12:33:23.838266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.286 [2024-11-04 12:33:23.841742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.548 [2024-11-04 12:33:23.850997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.548 [2024-11-04 12:33:23.851650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.548 [2024-11-04 12:33:23.851687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.548 [2024-11-04 12:33:23.851700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.548 [2024-11-04 12:33:23.851947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.548 [2024-11-04 12:33:23.852166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.548 [2024-11-04 12:33:23.852175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.548 [2024-11-04 12:33:23.852183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.548 [2024-11-04 12:33:23.855665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.548 [2024-11-04 12:33:23.864911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.548 [2024-11-04 12:33:23.865555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.548 [2024-11-04 12:33:23.865593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.548 [2024-11-04 12:33:23.865604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.548 [2024-11-04 12:33:23.865847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.548 [2024-11-04 12:33:23.866067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.548 [2024-11-04 12:33:23.866076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.548 [2024-11-04 12:33:23.866083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.548 [2024-11-04 12:33:23.869566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.549 [2024-11-04 12:33:23.878815] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.549 [2024-11-04 12:33:23.879473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.549 [2024-11-04 12:33:23.879511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.549 [2024-11-04 12:33:23.879526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.549 [2024-11-04 12:33:23.879769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.549 [2024-11-04 12:33:23.879989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.549 [2024-11-04 12:33:23.879997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.549 [2024-11-04 12:33:23.880004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.549 [2024-11-04 12:33:23.883487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.549 [2024-11-04 12:33:23.892541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.549 [2024-11-04 12:33:23.893191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.549 [2024-11-04 12:33:23.893229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.549 [2024-11-04 12:33:23.893240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.549 [2024-11-04 12:33:23.893474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.549 [2024-11-04 12:33:23.893693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.549 [2024-11-04 12:33:23.893702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.549 [2024-11-04 12:33:23.893710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.549 [2024-11-04 12:33:23.897211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.549 [2024-11-04 12:33:23.906459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.549 [2024-11-04 12:33:23.907126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.549 [2024-11-04 12:33:23.907163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.549 [2024-11-04 12:33:23.907174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.549 [2024-11-04 12:33:23.907409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.549 [2024-11-04 12:33:23.907628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.549 [2024-11-04 12:33:23.907636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.549 [2024-11-04 12:33:23.907644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.549 [2024-11-04 12:33:23.911136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.549 [2024-11-04 12:33:23.920380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.549 [2024-11-04 12:33:23.921058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.549 [2024-11-04 12:33:23.921096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.549 [2024-11-04 12:33:23.921107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.549 [2024-11-04 12:33:23.921342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.549 [2024-11-04 12:33:23.921561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.549 [2024-11-04 12:33:23.921574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.549 [2024-11-04 12:33:23.921582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.549 [2024-11-04 12:33:23.925074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.549 [2024-11-04 12:33:23.934116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.549 [2024-11-04 12:33:23.934801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.549 [2024-11-04 12:33:23.934839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.549 [2024-11-04 12:33:23.934850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.549 [2024-11-04 12:33:23.935084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.549 [2024-11-04 12:33:23.935303] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.549 [2024-11-04 12:33:23.935312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.549 [2024-11-04 12:33:23.935319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.549 [2024-11-04 12:33:23.938810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.549 [2024-11-04 12:33:23.947858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.549 [2024-11-04 12:33:23.948525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.549 [2024-11-04 12:33:23.948563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.549 [2024-11-04 12:33:23.948574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.549 [2024-11-04 12:33:23.948817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.549 [2024-11-04 12:33:23.949037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.549 [2024-11-04 12:33:23.949045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.549 [2024-11-04 12:33:23.949053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.549 [2024-11-04 12:33:23.952540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.549 [2024-11-04 12:33:23.961585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.549 [2024-11-04 12:33:23.962254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.549 [2024-11-04 12:33:23.962291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.549 [2024-11-04 12:33:23.962302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.549 [2024-11-04 12:33:23.962537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.549 [2024-11-04 12:33:23.962764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.549 [2024-11-04 12:33:23.962774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.549 [2024-11-04 12:33:23.962782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.549 [2024-11-04 12:33:23.966271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.549 [2024-11-04 12:33:23.975317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.549 [2024-11-04 12:33:23.975884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.549 [2024-11-04 12:33:23.975922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.549 [2024-11-04 12:33:23.975934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.549 [2024-11-04 12:33:23.976172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.549 [2024-11-04 12:33:23.976390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.549 [2024-11-04 12:33:23.976399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.549 [2024-11-04 12:33:23.976406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.549 [2024-11-04 12:33:23.979896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.549 [2024-11-04 12:33:23.989134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.549 [2024-11-04 12:33:23.989711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.549 [2024-11-04 12:33:23.989729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.549 [2024-11-04 12:33:23.989737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.549 [2024-11-04 12:33:23.989959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.549 [2024-11-04 12:33:23.990175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.549 [2024-11-04 12:33:23.990182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.549 [2024-11-04 12:33:23.990189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.549 [2024-11-04 12:33:23.993665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.549 [2024-11-04 12:33:24.002912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.549 [2024-11-04 12:33:24.003434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.549 [2024-11-04 12:33:24.003450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.549 [2024-11-04 12:33:24.003457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.549 [2024-11-04 12:33:24.003671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.549 [2024-11-04 12:33:24.003891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.549 [2024-11-04 12:33:24.003900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.550 [2024-11-04 12:33:24.003907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.550 [2024-11-04 12:33:24.007382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.550 [2024-11-04 12:33:24.016822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.550 [2024-11-04 12:33:24.017332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.550 [2024-11-04 12:33:24.017349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.550 [2024-11-04 12:33:24.017360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.550 [2024-11-04 12:33:24.017575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.550 [2024-11-04 12:33:24.017797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.550 [2024-11-04 12:33:24.017806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.550 [2024-11-04 12:33:24.017813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.550 [2024-11-04 12:33:24.021302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.550 [2024-11-04 12:33:24.030548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.550 [2024-11-04 12:33:24.031203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.550 [2024-11-04 12:33:24.031241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.550 [2024-11-04 12:33:24.031252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.550 [2024-11-04 12:33:24.031486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.550 [2024-11-04 12:33:24.031705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.550 [2024-11-04 12:33:24.031714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.550 [2024-11-04 12:33:24.031722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.550 [2024-11-04 12:33:24.035218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.550 [2024-11-04 12:33:24.044466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.550 [2024-11-04 12:33:24.045008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.550 [2024-11-04 12:33:24.045028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.550 [2024-11-04 12:33:24.045036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.550 [2024-11-04 12:33:24.045252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.550 [2024-11-04 12:33:24.045467] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.550 [2024-11-04 12:33:24.045475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.550 [2024-11-04 12:33:24.045483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.550 [2024-11-04 12:33:24.048980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.550 [2024-11-04 12:33:24.058225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.550 [2024-11-04 12:33:24.058735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.550 [2024-11-04 12:33:24.058757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.550 [2024-11-04 12:33:24.058766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.550 [2024-11-04 12:33:24.058981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.550 [2024-11-04 12:33:24.059196] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.550 [2024-11-04 12:33:24.059209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.550 [2024-11-04 12:33:24.059216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.550 [2024-11-04 12:33:24.062695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.550 [2024-11-04 12:33:24.072152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.550 [2024-11-04 12:33:24.072711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.550 [2024-11-04 12:33:24.072727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.550 [2024-11-04 12:33:24.072735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.550 [2024-11-04 12:33:24.072956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.550 [2024-11-04 12:33:24.073171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.550 [2024-11-04 12:33:24.073179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.550 [2024-11-04 12:33:24.073186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.550 [2024-11-04 12:33:24.076663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.550 [2024-11-04 12:33:24.086027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.550 [2024-11-04 12:33:24.086581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.550 [2024-11-04 12:33:24.086598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.550 [2024-11-04 12:33:24.086605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.550 [2024-11-04 12:33:24.086826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.550 [2024-11-04 12:33:24.087043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.550 [2024-11-04 12:33:24.087051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.550 [2024-11-04 12:33:24.087058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.550 [2024-11-04 12:33:24.090537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.550 [2024-11-04 12:33:24.099792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.550 [2024-11-04 12:33:24.100324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.550 [2024-11-04 12:33:24.100340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.550 [2024-11-04 12:33:24.100349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.550 [2024-11-04 12:33:24.100564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.550 [2024-11-04 12:33:24.100784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.550 [2024-11-04 12:33:24.100792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.550 [2024-11-04 12:33:24.100799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.550 [2024-11-04 12:33:24.104275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.550 [2024-11-04 12:33:24.113649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.550 [2024-11-04 12:33:24.114192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.550 [2024-11-04 12:33:24.114214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.550 [2024-11-04 12:33:24.114222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.550 [2024-11-04 12:33:24.114441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.550 [2024-11-04 12:33:24.114659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.550 [2024-11-04 12:33:24.114668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.550 [2024-11-04 12:33:24.114675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.813 [2024-11-04 12:33:24.118241] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.813 [2024-11-04 12:33:24.127579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.813 [2024-11-04 12:33:24.128103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.813 [2024-11-04 12:33:24.128120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.813 [2024-11-04 12:33:24.128128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.813 [2024-11-04 12:33:24.128343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.813 [2024-11-04 12:33:24.128558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.813 [2024-11-04 12:33:24.128566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.813 [2024-11-04 12:33:24.128573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.813 [2024-11-04 12:33:24.132055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.813 [2024-11-04 12:33:24.141501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.813 [2024-11-04 12:33:24.142056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.813 [2024-11-04 12:33:24.142073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.813 [2024-11-04 12:33:24.142080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.813 [2024-11-04 12:33:24.142295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.813 [2024-11-04 12:33:24.142510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.813 [2024-11-04 12:33:24.142518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.813 [2024-11-04 12:33:24.142526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.813 [2024-11-04 12:33:24.146009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.813 [2024-11-04 12:33:24.155260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.813 [2024-11-04 12:33:24.155711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.813 [2024-11-04 12:33:24.155727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.813 [2024-11-04 12:33:24.155735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.813 [2024-11-04 12:33:24.155960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.813 [2024-11-04 12:33:24.156176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.813 [2024-11-04 12:33:24.156184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.813 [2024-11-04 12:33:24.156191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.813 [2024-11-04 12:33:24.159665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.813 [2024-11-04 12:33:24.169110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.813 [2024-11-04 12:33:24.169677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.813 [2024-11-04 12:33:24.169693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.813 [2024-11-04 12:33:24.169700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.813 [2024-11-04 12:33:24.169921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.813 [2024-11-04 12:33:24.170137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.813 [2024-11-04 12:33:24.170145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.813 [2024-11-04 12:33:24.170152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.813 [2024-11-04 12:33:24.173628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.813 [2024-11-04 12:33:24.182868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.813 [2024-11-04 12:33:24.183405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.813 [2024-11-04 12:33:24.183420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.813 [2024-11-04 12:33:24.183427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.813 [2024-11-04 12:33:24.183642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.813 [2024-11-04 12:33:24.183861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.813 [2024-11-04 12:33:24.183870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.813 [2024-11-04 12:33:24.183877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.813 [2024-11-04 12:33:24.187355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.813 [2024-11-04 12:33:24.196597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.814 [2024-11-04 12:33:24.197165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.814 [2024-11-04 12:33:24.197181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.814 [2024-11-04 12:33:24.197189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.814 [2024-11-04 12:33:24.197403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.814 [2024-11-04 12:33:24.197618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.814 [2024-11-04 12:33:24.197626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.814 [2024-11-04 12:33:24.197637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.814 [2024-11-04 12:33:24.201116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.814 [2024-11-04 12:33:24.210356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.814 [2024-11-04 12:33:24.210808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.814 [2024-11-04 12:33:24.210824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.814 [2024-11-04 12:33:24.210832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.814 [2024-11-04 12:33:24.211047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.814 [2024-11-04 12:33:24.211262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.814 [2024-11-04 12:33:24.211270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.814 [2024-11-04 12:33:24.211277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.814 [2024-11-04 12:33:24.214756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.814 [2024-11-04 12:33:24.224198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.814 [2024-11-04 12:33:24.224868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.814 [2024-11-04 12:33:24.224906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.814 [2024-11-04 12:33:24.224919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.814 [2024-11-04 12:33:24.225156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.814 [2024-11-04 12:33:24.225375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.814 [2024-11-04 12:33:24.225384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.814 [2024-11-04 12:33:24.225393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.814 [2024-11-04 12:33:24.228888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.814 [2024-11-04 12:33:24.237926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.814 [2024-11-04 12:33:24.238347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.814 [2024-11-04 12:33:24.238369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.814 [2024-11-04 12:33:24.238377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.814 [2024-11-04 12:33:24.238595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.814 [2024-11-04 12:33:24.238819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.814 [2024-11-04 12:33:24.238828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.814 [2024-11-04 12:33:24.238835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.814 [2024-11-04 12:33:24.242317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.814 [2024-11-04 12:33:24.251780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.814 [2024-11-04 12:33:24.252394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.814 [2024-11-04 12:33:24.252437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.814 [2024-11-04 12:33:24.252448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.814 [2024-11-04 12:33:24.252683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.814 [2024-11-04 12:33:24.252910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.814 [2024-11-04 12:33:24.252920] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.814 [2024-11-04 12:33:24.252928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.814 [2024-11-04 12:33:24.256415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.814 [2024-11-04 12:33:24.265666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.814 [2024-11-04 12:33:24.266339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.814 [2024-11-04 12:33:24.266376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.814 [2024-11-04 12:33:24.266387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.814 [2024-11-04 12:33:24.266622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.814 [2024-11-04 12:33:24.266849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.814 [2024-11-04 12:33:24.266859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.814 [2024-11-04 12:33:24.266867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.814 [2024-11-04 12:33:24.270352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.814 [2024-11-04 12:33:24.279402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.814 [2024-11-04 12:33:24.279952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.814 [2024-11-04 12:33:24.279972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.814 [2024-11-04 12:33:24.279980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.814 [2024-11-04 12:33:24.280196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.814 [2024-11-04 12:33:24.280411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.814 [2024-11-04 12:33:24.280419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.814 [2024-11-04 12:33:24.280426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.814 [2024-11-04 12:33:24.283909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.814 [2024-11-04 12:33:24.293150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.814 [2024-11-04 12:33:24.293593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.814 [2024-11-04 12:33:24.293609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.814 [2024-11-04 12:33:24.293616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.814 [2024-11-04 12:33:24.293836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.814 [2024-11-04 12:33:24.294056] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.814 [2024-11-04 12:33:24.294064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.814 [2024-11-04 12:33:24.294071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.814 [2024-11-04 12:33:24.297561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.814 [2024-11-04 12:33:24.307012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.814 [2024-11-04 12:33:24.307578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.814 [2024-11-04 12:33:24.307594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.814 [2024-11-04 12:33:24.307601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.814 [2024-11-04 12:33:24.307821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.814 [2024-11-04 12:33:24.308037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.814 [2024-11-04 12:33:24.308045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.814 [2024-11-04 12:33:24.308052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.814 [2024-11-04 12:33:24.311527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.814 [2024-11-04 12:33:24.320771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.814 [2024-11-04 12:33:24.321292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.814 [2024-11-04 12:33:24.321307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.814 [2024-11-04 12:33:24.321314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.814 [2024-11-04 12:33:24.321529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.814 [2024-11-04 12:33:24.321744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.814 [2024-11-04 12:33:24.321758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.814 [2024-11-04 12:33:24.321765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.814 [2024-11-04 12:33:24.325245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.814 [2024-11-04 12:33:24.334488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.814 [2024-11-04 12:33:24.335029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.814 [2024-11-04 12:33:24.335067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.815 [2024-11-04 12:33:24.335080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.815 [2024-11-04 12:33:24.335315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.815 [2024-11-04 12:33:24.335535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.815 [2024-11-04 12:33:24.335543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.815 [2024-11-04 12:33:24.335551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.815 [2024-11-04 12:33:24.339047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
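Annotation: every one of the elided cycles dies at the same spot: connect() to 10.0.0.2:4420 returns errno 111 (ECONNREFUSED), meaning nothing is listening on the target side while it is down. A host-side probe of that listener shows the same refusal; this is a minimal illustrative bash sketch (address and port taken from the log above; the retry loop is not part of the test suite):

  #!/usr/bin/env bash
  # Probe the NVMe-oF TCP listener the initiator keeps reconnecting to.
  addr=10.0.0.2 port=4420
  for i in $(seq 1 10); do
      # bash's /dev/tcp performs a plain TCP connect; a refusal here is
      # the same ECONNREFUSED (errno 111) that posix_sock_create reports.
      if (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; then
          echo "attempt ${i}: listener is up"
          break
      fi
      echo "attempt ${i}: connection refused, retrying"
      sleep 0.5
  done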
00:28:49.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1820781 Killed "${NVMF_APP[@]}" "$@" 00:28:49.815 12:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:49.815 12:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:49.815 12:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:49.815 12:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:49.815 12:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.815 [2024-11-04 12:33:24.348302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.815 [2024-11-04 12:33:24.348873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.815 [2024-11-04 12:33:24.348911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.815 [2024-11-04 12:33:24.348923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.815 [2024-11-04 12:33:24.349159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.815 [2024-11-04 12:33:24.349388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.815 [2024-11-04 12:33:24.349398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.815 [2024-11-04 12:33:24.349406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.815 12:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1822404 00:28:49.815 12:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1822404 00:28:49.815 12:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:49.815 12:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1822404 ']' 00:28:49.815 12:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.815 12:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:49.815 12:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:49.815 12:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:49.815 12:33:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.815 [2024-11-04 12:33:24.352899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
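Annotation: this block is the pivot of the log. bdevperf.sh line 35 reports the previous target process (pid 1820781) as Killed, and tgt_init restarts it through nvmfappstart; the raw command is echoed above (nvmf_tgt -i 0 -e 0xFFFF -m 0xE inside netns cvl_0_0_ns_spdk, new pid 1822404), followed by the wait on /var/tmp/spdk.sock. The start-and-wait pattern it relies on looks roughly like the sketch below; the binary path, netns, and flags are copied from the log, while the polling loop is illustrative rather than SPDK's actual waitforlisten:

  # Restart the target and block until its RPC socket exists.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do
      # Bail out if the app died before creating its socket.
      kill -0 "${nvmfpid}" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.2
  done
  echo "nvmf_tgt (pid ${nvmfpid}) ready on /var/tmp/spdk.sock"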
00:28:49.815 [2024-11-04 12:33:24.362149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.815 [2024-11-04 12:33:24.362826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.815 [2024-11-04 12:33:24.362864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.815 [2024-11-04 12:33:24.362877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.815 [2024-11-04 12:33:24.363115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.815 [2024-11-04 12:33:24.363335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.815 [2024-11-04 12:33:24.363343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.815 [2024-11-04 12:33:24.363352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.815 [2024-11-04 12:33:24.366853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.815 [2024-11-04 12:33:24.375904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.815 [2024-11-04 12:33:24.376433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.815 [2024-11-04 12:33:24.376451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:49.815 [2024-11-04 12:33:24.376460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:49.815 [2024-11-04 12:33:24.376676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:49.815 [2024-11-04 12:33:24.376898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.815 [2024-11-04 12:33:24.376907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.815 [2024-11-04 12:33:24.376914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.078 [2024-11-04 12:33:24.380397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.078 [2024-11-04 12:33:24.389650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.078 [2024-11-04 12:33:24.390062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.078 [2024-11-04 12:33:24.390081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.078 [2024-11-04 12:33:24.390089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.078 [2024-11-04 12:33:24.390304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.078 [2024-11-04 12:33:24.390519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.078 [2024-11-04 12:33:24.390527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.078 [2024-11-04 12:33:24.390534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.078 [2024-11-04 12:33:24.394020] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.078 [2024-11-04 12:33:24.401385] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:28:50.078 [2024-11-04 12:33:24.401431] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.078 [2024-11-04 12:33:24.403482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.078 [2024-11-04 12:33:24.404026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.078 [2024-11-04 12:33:24.404043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.078 [2024-11-04 12:33:24.404051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.078 [2024-11-04 12:33:24.404268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.078 [2024-11-04 12:33:24.404483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.078 [2024-11-04 12:33:24.404491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.078 [2024-11-04 12:33:24.404499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.078 [2024-11-04 12:33:24.407986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
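Annotation: the "Starting SPDK v25.01-pre" banner confirms the replacement target is initializing; the DPDK EAL parameters echo the nvmf_tgt flags (-c 0xE is the core mask derived from -m 0xE, and --file-prefix=spdk0 keeps this instance's shared-memory files separate from the bdevperf process). Decoding a core mask is plain bit arithmetic, for example:

  # 0xE = binary 1110, so the target polls on cores 1, 2 and 3.
  mask=0xE
  printf 'cores:'
  for bit in $(seq 0 31); do
      (( (mask >> bit) & 1 )) && printf ' %d' "${bit}"
  done
  echo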
00:28:50.078 [2024-11-04 12:33:24.417226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.078 [2024-11-04 12:33:24.417633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.078 [2024-11-04 12:33:24.417650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.078 [2024-11-04 12:33:24.417658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.078 [2024-11-04 12:33:24.417878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.078 [2024-11-04 12:33:24.418094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.078 [2024-11-04 12:33:24.418102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.078 [2024-11-04 12:33:24.418110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.078 [2024-11-04 12:33:24.421589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.078 [2024-11-04 12:33:24.431124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.078 [2024-11-04 12:33:24.431823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.078 [2024-11-04 12:33:24.431861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.078 [2024-11-04 12:33:24.431872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.078 [2024-11-04 12:33:24.432108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.078 [2024-11-04 12:33:24.432326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.078 [2024-11-04 12:33:24.432335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.078 [2024-11-04 12:33:24.432342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.078 [2024-11-04 12:33:24.435836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.078 [2024-11-04 12:33:24.444887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.078 [2024-11-04 12:33:24.445566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.078 [2024-11-04 12:33:24.445604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.078 [2024-11-04 12:33:24.445615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.078 [2024-11-04 12:33:24.445858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.078 [2024-11-04 12:33:24.446078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.078 [2024-11-04 12:33:24.446086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.078 [2024-11-04 12:33:24.446094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.078 [2024-11-04 12:33:24.449591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.078 [2024-11-04 12:33:24.458641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.078 [2024-11-04 12:33:24.459286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.078 [2024-11-04 12:33:24.459323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.078 [2024-11-04 12:33:24.459339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.078 [2024-11-04 12:33:24.459574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.078 [2024-11-04 12:33:24.459801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.078 [2024-11-04 12:33:24.459811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.078 [2024-11-04 12:33:24.459818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.078 [2024-11-04 12:33:24.463306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.078 [2024-11-04 12:33:24.472557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.078 [2024-11-04 12:33:24.473245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.078 [2024-11-04 12:33:24.473283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.078 [2024-11-04 12:33:24.473294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.078 [2024-11-04 12:33:24.473529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.078 [2024-11-04 12:33:24.473756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.078 [2024-11-04 12:33:24.473766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.078 [2024-11-04 12:33:24.473774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.078 [2024-11-04 12:33:24.477257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.078 [2024-11-04 12:33:24.481853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:50.078 [2024-11-04 12:33:24.486308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.078 [2024-11-04 12:33:24.486913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.078 [2024-11-04 12:33:24.486933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.078 [2024-11-04 12:33:24.486942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.078 [2024-11-04 12:33:24.487158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.078 [2024-11-04 12:33:24.487374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.078 [2024-11-04 12:33:24.487382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.078 [2024-11-04 12:33:24.487390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.078 [2024-11-04 12:33:24.490878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.079 [2024-11-04 12:33:24.500138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.079 [2024-11-04 12:33:24.500673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.079 [2024-11-04 12:33:24.500690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.079 [2024-11-04 12:33:24.500698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.079 [2024-11-04 12:33:24.500918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.079 [2024-11-04 12:33:24.501139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.079 [2024-11-04 12:33:24.501148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.079 [2024-11-04 12:33:24.501155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.079 [2024-11-04 12:33:24.504637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.079 [2024-11-04 12:33:24.511106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:50.079 [2024-11-04 12:33:24.511130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:50.079 [2024-11-04 12:33:24.511137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:50.079 [2024-11-04 12:33:24.511142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:50.079 [2024-11-04 12:33:24.511147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:50.079 [2024-11-04 12:33:24.512222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:50.079 [2024-11-04 12:33:24.512384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:50.079 [2024-11-04 12:33:24.512386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:50.079 [2024-11-04 12:33:24.513887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.079 [2024-11-04 12:33:24.514417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.079 [2024-11-04 12:33:24.514433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.079 [2024-11-04 12:33:24.514441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.079 [2024-11-04 12:33:24.514658] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.079 [2024-11-04 12:33:24.514878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.079 [2024-11-04 12:33:24.514887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.079 [2024-11-04 12:33:24.514894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.079 [2024-11-04 12:33:24.518377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.079 [2024-11-04 12:33:24.527621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.079 [2024-11-04 12:33:24.528180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.079 [2024-11-04 12:33:24.528222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.079 [2024-11-04 12:33:24.528234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.079 [2024-11-04 12:33:24.528475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.079 [2024-11-04 12:33:24.528694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.079 [2024-11-04 12:33:24.528703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.079 [2024-11-04 12:33:24.528711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.079 [2024-11-04 12:33:24.532273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.079 [2024-11-04 12:33:24.541530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.079 [2024-11-04 12:33:24.542047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.079 [2024-11-04 12:33:24.542093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.079 [2024-11-04 12:33:24.542104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.079 [2024-11-04 12:33:24.542342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.079 [2024-11-04 12:33:24.542561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.079 [2024-11-04 12:33:24.542571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.079 [2024-11-04 12:33:24.542578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.079 [2024-11-04 12:33:24.546072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.079 [2024-11-04 12:33:24.555346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.079 [2024-11-04 12:33:24.556026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.079 [2024-11-04 12:33:24.556065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.079 [2024-11-04 12:33:24.556077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.079 [2024-11-04 12:33:24.556313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.079 [2024-11-04 12:33:24.556532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.079 [2024-11-04 12:33:24.556540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.079 [2024-11-04 12:33:24.556548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.079 [2024-11-04 12:33:24.560042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.079 [2024-11-04 12:33:24.569085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.079 [2024-11-04 12:33:24.569691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.079 [2024-11-04 12:33:24.569728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.079 [2024-11-04 12:33:24.569740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.079 [2024-11-04 12:33:24.569983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.079 [2024-11-04 12:33:24.570203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.079 [2024-11-04 12:33:24.570211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.079 [2024-11-04 12:33:24.570220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.079 [2024-11-04 12:33:24.573705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.079 [2024-11-04 12:33:24.582957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.079 [2024-11-04 12:33:24.583469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.079 [2024-11-04 12:33:24.583506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.079 [2024-11-04 12:33:24.583518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.079 [2024-11-04 12:33:24.583765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.079 [2024-11-04 12:33:24.583989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.079 [2024-11-04 12:33:24.583998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.079 [2024-11-04 12:33:24.584006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.079 [2024-11-04 12:33:24.587489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.079 [2024-11-04 12:33:24.596750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.079 [2024-11-04 12:33:24.597352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.079 [2024-11-04 12:33:24.597390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.079 [2024-11-04 12:33:24.597401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.079 [2024-11-04 12:33:24.597635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.079 [2024-11-04 12:33:24.597863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.079 [2024-11-04 12:33:24.597873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.079 [2024-11-04 12:33:24.597880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.079 [2024-11-04 12:33:24.601367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.079 [2024-11-04 12:33:24.610618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.079 [2024-11-04 12:33:24.611070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.079 [2024-11-04 12:33:24.611089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.079 [2024-11-04 12:33:24.611098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.079 [2024-11-04 12:33:24.611313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.079 [2024-11-04 12:33:24.611529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.079 [2024-11-04 12:33:24.611537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.079 [2024-11-04 12:33:24.611544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.079 [2024-11-04 12:33:24.615027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.079 [2024-11-04 12:33:24.624475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.079 [2024-11-04 12:33:24.625051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.079 [2024-11-04 12:33:24.625090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.080 [2024-11-04 12:33:24.625101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.080 [2024-11-04 12:33:24.625335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.080 [2024-11-04 12:33:24.625554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.080 [2024-11-04 12:33:24.625563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.080 [2024-11-04 12:33:24.625571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.080 [2024-11-04 12:33:24.629069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.080 [2024-11-04 12:33:24.638316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.080 [2024-11-04 12:33:24.638774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.080 [2024-11-04 12:33:24.638813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.080 [2024-11-04 12:33:24.638825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.080 [2024-11-04 12:33:24.639060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.080 [2024-11-04 12:33:24.639279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.080 [2024-11-04 12:33:24.639287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.080 [2024-11-04 12:33:24.639295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.080 [2024-11-04 12:33:24.642787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.342 [2024-11-04 12:33:24.652249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.342 [2024-11-04 12:33:24.652652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.342 [2024-11-04 12:33:24.652672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.342 [2024-11-04 12:33:24.652680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.342 [2024-11-04 12:33:24.652902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.342 [2024-11-04 12:33:24.653118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.342 [2024-11-04 12:33:24.653134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.342 [2024-11-04 12:33:24.653142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.342 [2024-11-04 12:33:24.656624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.342 [2024-11-04 12:33:24.666070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.342 [2024-11-04 12:33:24.666564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.342 [2024-11-04 12:33:24.666602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.342 [2024-11-04 12:33:24.666614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.342 [2024-11-04 12:33:24.666857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.342 [2024-11-04 12:33:24.667076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.342 [2024-11-04 12:33:24.667085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.342 [2024-11-04 12:33:24.667093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.343 [2024-11-04 12:33:24.670578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.343 [2024-11-04 12:33:24.679828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.343 [2024-11-04 12:33:24.680510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.343 [2024-11-04 12:33:24.680547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.343 [2024-11-04 12:33:24.680563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.343 [2024-11-04 12:33:24.680806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.343 [2024-11-04 12:33:24.681026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.343 [2024-11-04 12:33:24.681034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.343 [2024-11-04 12:33:24.681042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.343 [2024-11-04 12:33:24.684525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.343 5000.50 IOPS, 19.53 MiB/s [2024-11-04T11:33:24.913Z] [2024-11-04 12:33:24.695229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.343 [2024-11-04 12:33:24.695793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.343 [2024-11-04 12:33:24.695820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.343 [2024-11-04 12:33:24.695828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.343 [2024-11-04 12:33:24.696048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.343 [2024-11-04 12:33:24.696264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.343 [2024-11-04 12:33:24.696272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.343 [2024-11-04 12:33:24.696279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.343 [2024-11-04 12:33:24.699777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.343 [2024-11-04 12:33:24.709079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.343 [2024-11-04 12:33:24.709467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.343 [2024-11-04 12:33:24.709484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.343 [2024-11-04 12:33:24.709492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.343 [2024-11-04 12:33:24.709707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.343 [2024-11-04 12:33:24.709927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.343 [2024-11-04 12:33:24.709936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.343 [2024-11-04 12:33:24.709943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.343 [2024-11-04 12:33:24.713422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.343 [2024-11-04 12:33:24.722872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.343 [2024-11-04 12:33:24.723461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.343 [2024-11-04 12:33:24.723499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.343 [2024-11-04 12:33:24.723511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.343 [2024-11-04 12:33:24.723754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.343 [2024-11-04 12:33:24.723974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.343 [2024-11-04 12:33:24.723987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.343 [2024-11-04 12:33:24.723995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.343 [2024-11-04 12:33:24.727480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.343 [2024-11-04 12:33:24.736728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.343 [2024-11-04 12:33:24.737328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.343 [2024-11-04 12:33:24.737366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.343 [2024-11-04 12:33:24.737378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.343 [2024-11-04 12:33:24.737612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.343 [2024-11-04 12:33:24.737839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.343 [2024-11-04 12:33:24.737848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.343 [2024-11-04 12:33:24.737856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.343 [2024-11-04 12:33:24.741339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.343 [2024-11-04 12:33:24.750587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.343 [2024-11-04 12:33:24.751132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.343 [2024-11-04 12:33:24.751168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.343 [2024-11-04 12:33:24.751181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.343 [2024-11-04 12:33:24.751415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.343 [2024-11-04 12:33:24.751634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.343 [2024-11-04 12:33:24.751643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.343 [2024-11-04 12:33:24.751651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.343 [2024-11-04 12:33:24.755142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.343 [2024-11-04 12:33:24.764392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.343 [2024-11-04 12:33:24.764885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.343 [2024-11-04 12:33:24.764923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.343 [2024-11-04 12:33:24.764934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.343 [2024-11-04 12:33:24.765168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.343 [2024-11-04 12:33:24.765387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.343 [2024-11-04 12:33:24.765396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.343 [2024-11-04 12:33:24.765403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.343 [2024-11-04 12:33:24.768895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.343 [2024-11-04 12:33:24.778150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.343 [2024-11-04 12:33:24.778798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.343 [2024-11-04 12:33:24.778837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.343 [2024-11-04 12:33:24.778848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.343 [2024-11-04 12:33:24.779083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.343 [2024-11-04 12:33:24.779302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.343 [2024-11-04 12:33:24.779310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.343 [2024-11-04 12:33:24.779318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.343 [2024-11-04 12:33:24.782809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.343 [2024-11-04 12:33:24.792055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.343 [2024-11-04 12:33:24.792708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.343 [2024-11-04 12:33:24.792753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.343 [2024-11-04 12:33:24.792764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.343 [2024-11-04 12:33:24.792999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.343 [2024-11-04 12:33:24.793219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.343 [2024-11-04 12:33:24.793227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.343 [2024-11-04 12:33:24.793235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.343 [2024-11-04 12:33:24.796716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.343 [2024-11-04 12:33:24.805969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.343 [2024-11-04 12:33:24.806585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.343 [2024-11-04 12:33:24.806623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.343 [2024-11-04 12:33:24.806635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.343 [2024-11-04 12:33:24.806876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.343 [2024-11-04 12:33:24.807096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.343 [2024-11-04 12:33:24.807105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.343 [2024-11-04 12:33:24.807112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.343 [2024-11-04 12:33:24.810593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.344 [2024-11-04 12:33:24.819837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.344 [2024-11-04 12:33:24.820393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.344 [2024-11-04 12:33:24.820411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.344 [2024-11-04 12:33:24.820419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.344 [2024-11-04 12:33:24.820639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.344 [2024-11-04 12:33:24.820860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.344 [2024-11-04 12:33:24.820869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.344 [2024-11-04 12:33:24.820876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.344 [2024-11-04 12:33:24.824356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.344 [2024-11-04 12:33:24.833590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.344 [2024-11-04 12:33:24.834220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.344 [2024-11-04 12:33:24.834258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.344 [2024-11-04 12:33:24.834269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.344 [2024-11-04 12:33:24.834504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.344 [2024-11-04 12:33:24.834722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.344 [2024-11-04 12:33:24.834730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.344 [2024-11-04 12:33:24.834738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.344 [2024-11-04 12:33:24.838228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.344 [2024-11-04 12:33:24.847466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.344 [2024-11-04 12:33:24.848077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.344 [2024-11-04 12:33:24.848115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.344 [2024-11-04 12:33:24.848126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.344 [2024-11-04 12:33:24.848361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.344 [2024-11-04 12:33:24.848579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.344 [2024-11-04 12:33:24.848588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.344 [2024-11-04 12:33:24.848595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.344 [2024-11-04 12:33:24.852097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.344 [2024-11-04 12:33:24.861346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.344 [2024-11-04 12:33:24.861870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.344 [2024-11-04 12:33:24.861908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.344 [2024-11-04 12:33:24.861920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.344 [2024-11-04 12:33:24.862158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.344 [2024-11-04 12:33:24.862377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.344 [2024-11-04 12:33:24.862386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.344 [2024-11-04 12:33:24.862397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.344 [2024-11-04 12:33:24.865888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.344 [2024-11-04 12:33:24.875125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.344 [2024-11-04 12:33:24.875513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.344 [2024-11-04 12:33:24.875532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.344 [2024-11-04 12:33:24.875540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.344 [2024-11-04 12:33:24.875761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.344 [2024-11-04 12:33:24.875977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.344 [2024-11-04 12:33:24.875986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.344 [2024-11-04 12:33:24.875993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.344 [2024-11-04 12:33:24.879469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.344 [2024-11-04 12:33:24.888910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.344 [2024-11-04 12:33:24.889563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.344 [2024-11-04 12:33:24.889601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.344 [2024-11-04 12:33:24.889612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.344 [2024-11-04 12:33:24.889853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.344 [2024-11-04 12:33:24.890072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.344 [2024-11-04 12:33:24.890081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.344 [2024-11-04 12:33:24.890089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.344 [2024-11-04 12:33:24.893572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.344 [2024-11-04 12:33:24.902826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.344 [2024-11-04 12:33:24.903440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.344 [2024-11-04 12:33:24.903478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.344 [2024-11-04 12:33:24.903489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.344 [2024-11-04 12:33:24.903724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.344 [2024-11-04 12:33:24.903952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.344 [2024-11-04 12:33:24.903962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.344 [2024-11-04 12:33:24.903970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.344 [2024-11-04 12:33:24.907453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.606 [2024-11-04 12:33:24.916703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.606 [2024-11-04 12:33:24.917257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.606 [2024-11-04 12:33:24.917295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.606 [2024-11-04 12:33:24.917306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.606 [2024-11-04 12:33:24.917541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.606 [2024-11-04 12:33:24.917768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.606 [2024-11-04 12:33:24.917777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.606 [2024-11-04 12:33:24.917785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.606 [2024-11-04 12:33:24.921266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.606 [2024-11-04 12:33:24.930515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.606 [2024-11-04 12:33:24.931033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.606 [2024-11-04 12:33:24.931052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.606 [2024-11-04 12:33:24.931060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.606 [2024-11-04 12:33:24.931276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.606 [2024-11-04 12:33:24.931492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.606 [2024-11-04 12:33:24.931500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.606 [2024-11-04 12:33:24.931507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.606 [2024-11-04 12:33:24.934989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.606 [2024-11-04 12:33:24.944427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.606 [2024-11-04 12:33:24.944970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.606 [2024-11-04 12:33:24.944987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.606 [2024-11-04 12:33:24.944995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.606 [2024-11-04 12:33:24.945210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.606 [2024-11-04 12:33:24.945425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.606 [2024-11-04 12:33:24.945433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.606 [2024-11-04 12:33:24.945440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.606 [2024-11-04 12:33:24.948921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.606 [2024-11-04 12:33:24.958166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.606 [2024-11-04 12:33:24.958704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.606 [2024-11-04 12:33:24.958721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.606 [2024-11-04 12:33:24.958728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.606 [2024-11-04 12:33:24.958951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.606 [2024-11-04 12:33:24.959167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.606 [2024-11-04 12:33:24.959175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.606 [2024-11-04 12:33:24.959182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.606 [2024-11-04 12:33:24.962658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.606 [2024-11-04 12:33:24.971890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.606 [2024-11-04 12:33:24.972431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.606 [2024-11-04 12:33:24.972446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.607 [2024-11-04 12:33:24.972453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.607 [2024-11-04 12:33:24.972668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.607 [2024-11-04 12:33:24.972888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.607 [2024-11-04 12:33:24.972897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.607 [2024-11-04 12:33:24.972903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.607 [2024-11-04 12:33:24.976379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.607 [2024-11-04 12:33:24.985607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:50.607 [2024-11-04 12:33:24.986155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.607 [2024-11-04 12:33:24.986170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420
00:28:50.607 [2024-11-04 12:33:24.986177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set
00:28:50.607 [2024-11-04 12:33:24.986392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor
00:28:50.607 [2024-11-04 12:33:24.986607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:50.607 [2024-11-04 12:33:24.986624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:50.607 [2024-11-04 12:33:24.986631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:50.607 [2024-11-04 12:33:24.990109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:50.607 [2024-11-04 12:33:24.999346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.607 [2024-11-04 12:33:25.000015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.607 [2024-11-04 12:33:25.000053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.607 [2024-11-04 12:33:25.000064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.607 [2024-11-04 12:33:25.000298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.607 [2024-11-04 12:33:25.000517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.607 [2024-11-04 12:33:25.000526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.607 [2024-11-04 12:33:25.000538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.607 [2024-11-04 12:33:25.004027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.607 [2024-11-04 12:33:25.013268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.607 [2024-11-04 12:33:25.013828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.607 [2024-11-04 12:33:25.013865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.607 [2024-11-04 12:33:25.013878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.607 [2024-11-04 12:33:25.014117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.607 [2024-11-04 12:33:25.014335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.607 [2024-11-04 12:33:25.014344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.607 [2024-11-04 12:33:25.014352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.607 [2024-11-04 12:33:25.017843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.607 [2024-11-04 12:33:25.027087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.607 [2024-11-04 12:33:25.027647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.607 [2024-11-04 12:33:25.027685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.607 [2024-11-04 12:33:25.027696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.607 [2024-11-04 12:33:25.027939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.607 [2024-11-04 12:33:25.028158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.607 [2024-11-04 12:33:25.028166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.607 [2024-11-04 12:33:25.028174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.607 [2024-11-04 12:33:25.031655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.607 [2024-11-04 12:33:25.040904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.607 [2024-11-04 12:33:25.041493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.607 [2024-11-04 12:33:25.041512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.607 [2024-11-04 12:33:25.041520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.607 [2024-11-04 12:33:25.041736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.607 [2024-11-04 12:33:25.041956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.607 [2024-11-04 12:33:25.041966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.607 [2024-11-04 12:33:25.041973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.607 [2024-11-04 12:33:25.045450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.607 [2024-11-04 12:33:25.054695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.607 [2024-11-04 12:33:25.055239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.607 [2024-11-04 12:33:25.055261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.607 [2024-11-04 12:33:25.055269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.607 [2024-11-04 12:33:25.055484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.607 [2024-11-04 12:33:25.055699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.607 [2024-11-04 12:33:25.055707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.607 [2024-11-04 12:33:25.055714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.607 [2024-11-04 12:33:25.059198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.607 [2024-11-04 12:33:25.068437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.607 [2024-11-04 12:33:25.069093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.607 [2024-11-04 12:33:25.069131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.607 [2024-11-04 12:33:25.069142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.607 [2024-11-04 12:33:25.069377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.607 [2024-11-04 12:33:25.069596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.607 [2024-11-04 12:33:25.069604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.607 [2024-11-04 12:33:25.069612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.607 [2024-11-04 12:33:25.073106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.607 [2024-11-04 12:33:25.082349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.607 [2024-11-04 12:33:25.082889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.607 [2024-11-04 12:33:25.082908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.607 [2024-11-04 12:33:25.082917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.607 [2024-11-04 12:33:25.083132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.607 [2024-11-04 12:33:25.083347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.607 [2024-11-04 12:33:25.083356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.607 [2024-11-04 12:33:25.083363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.607 [2024-11-04 12:33:25.086844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.607 [2024-11-04 12:33:25.096082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.607 [2024-11-04 12:33:25.096624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.607 [2024-11-04 12:33:25.096640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.607 [2024-11-04 12:33:25.096647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.608 [2024-11-04 12:33:25.096874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.608 [2024-11-04 12:33:25.097095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.608 [2024-11-04 12:33:25.097104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.608 [2024-11-04 12:33:25.097111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.608 [2024-11-04 12:33:25.100592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.608 [2024-11-04 12:33:25.109942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.608 [2024-11-04 12:33:25.110483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.608 [2024-11-04 12:33:25.110500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.608 [2024-11-04 12:33:25.110508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.608 [2024-11-04 12:33:25.110723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.608 [2024-11-04 12:33:25.110943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.608 [2024-11-04 12:33:25.110957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.608 [2024-11-04 12:33:25.110964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.608 [2024-11-04 12:33:25.114437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.608 [2024-11-04 12:33:25.123673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.608 [2024-11-04 12:33:25.124287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.608 [2024-11-04 12:33:25.124325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.608 [2024-11-04 12:33:25.124336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.608 [2024-11-04 12:33:25.124571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.608 [2024-11-04 12:33:25.124797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.608 [2024-11-04 12:33:25.124807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.608 [2024-11-04 12:33:25.124815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.608 [2024-11-04 12:33:25.128296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.608 [2024-11-04 12:33:25.137543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.608 [2024-11-04 12:33:25.138201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.608 [2024-11-04 12:33:25.138239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.608 [2024-11-04 12:33:25.138250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.608 [2024-11-04 12:33:25.138485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.608 [2024-11-04 12:33:25.138704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.608 [2024-11-04 12:33:25.138713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.608 [2024-11-04 12:33:25.138721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.608 [2024-11-04 12:33:25.142219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.608 [2024-11-04 12:33:25.151473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.608 [2024-11-04 12:33:25.152026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.608 [2024-11-04 12:33:25.152063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.608 [2024-11-04 12:33:25.152074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.608 [2024-11-04 12:33:25.152320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.608 [2024-11-04 12:33:25.152540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.608 [2024-11-04 12:33:25.152549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.608 [2024-11-04 12:33:25.152558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.608 [2024-11-04 12:33:25.156050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.608 [2024-11-04 12:33:25.165303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.608 [2024-11-04 12:33:25.165870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.608 [2024-11-04 12:33:25.165908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.608 [2024-11-04 12:33:25.165922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.608 [2024-11-04 12:33:25.166160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.608 [2024-11-04 12:33:25.166379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.608 [2024-11-04 12:33:25.166388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.608 [2024-11-04 12:33:25.166396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.608 [2024-11-04 12:33:25.169888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.870 [2024-11-04 12:33:25.179134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.870 [2024-11-04 12:33:25.179725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.870 [2024-11-04 12:33:25.179743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.870 [2024-11-04 12:33:25.179758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.870 [2024-11-04 12:33:25.179974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.870 [2024-11-04 12:33:25.180191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.870 [2024-11-04 12:33:25.180199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.870 [2024-11-04 12:33:25.180206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.870 [2024-11-04 12:33:25.183682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.870 [2024-11-04 12:33:25.192999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.870 [2024-11-04 12:33:25.193589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.870 [2024-11-04 12:33:25.193614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.870 [2024-11-04 12:33:25.193627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.870 [2024-11-04 12:33:25.193853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.870 [2024-11-04 12:33:25.194076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.870 [2024-11-04 12:33:25.194085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.870 [2024-11-04 12:33:25.194092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.870 [2024-11-04 12:33:25.197649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.870 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:50.870 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:50.870 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:50.870 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:50.870 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.870 [2024-11-04 12:33:25.206784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.870 [2024-11-04 12:33:25.207446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.870 [2024-11-04 12:33:25.207484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.870 [2024-11-04 12:33:25.207496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.870 [2024-11-04 12:33:25.207731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.870 [2024-11-04 12:33:25.207959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.870 [2024-11-04 12:33:25.207968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.870 [2024-11-04 12:33:25.207976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.870 [2024-11-04 12:33:25.211461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.870 [2024-11-04 12:33:25.220709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.870 [2024-11-04 12:33:25.221312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.870 [2024-11-04 12:33:25.221332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.870 [2024-11-04 12:33:25.221340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.870 [2024-11-04 12:33:25.221555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.870 [2024-11-04 12:33:25.221775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.870 [2024-11-04 12:33:25.221784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.870 [2024-11-04 12:33:25.221791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.870 [2024-11-04 12:33:25.225270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.870 [2024-11-04 12:33:25.234511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.870 [2024-11-04 12:33:25.235057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.870 [2024-11-04 12:33:25.235095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.870 [2024-11-04 12:33:25.235111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.870 [2024-11-04 12:33:25.235347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.870 [2024-11-04 12:33:25.235565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.870 [2024-11-04 12:33:25.235574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.870 [2024-11-04 12:33:25.235581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.870 [2024-11-04 12:33:25.239072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.870 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.870 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:50.870 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.870 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.870 [2024-11-04 12:33:25.243853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.870 [2024-11-04 12:33:25.248317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.870 [2024-11-04 12:33:25.248876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.870 [2024-11-04 12:33:25.248914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.870 [2024-11-04 12:33:25.248926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.870 [2024-11-04 12:33:25.249165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.870 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.870 [2024-11-04 12:33:25.249384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.870 [2024-11-04 12:33:25.249393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.870 [2024-11-04 12:33:25.249401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.870 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:50.870 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.870 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.870 [2024-11-04 12:33:25.252906] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
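The trap registered just above (nvmf/common.sh@510) is what guarantees the shared-memory dump and the nvmftestfini teardown run no matter how the test exits. A condensed sketch of the same pattern, with process_shm and nvmftestfini standing in for the suite's own helpers:

    cleanup() {
        process_shm --id "$NVMF_APP_SHM_ID" || :   # best-effort SHM dump, never fails the trap
        nvmftestfini                               # tear down target, modules, and addresses
    }
    trap cleanup SIGINT SIGTERM EXIT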
00:28:50.870 [2024-11-04 12:33:25.262156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.870 [2024-11-04 12:33:25.262810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.870 [2024-11-04 12:33:25.262848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.871 [2024-11-04 12:33:25.262860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.871 [2024-11-04 12:33:25.263097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.871 [2024-11-04 12:33:25.263315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.871 [2024-11-04 12:33:25.263324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.871 [2024-11-04 12:33:25.263332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.871 [2024-11-04 12:33:25.266832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.871 [2024-11-04 12:33:25.276075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.871 [2024-11-04 12:33:25.276625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.871 [2024-11-04 12:33:25.276644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.871 [2024-11-04 12:33:25.276653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.871 [2024-11-04 12:33:25.276874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.871 [2024-11-04 12:33:25.277090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.871 [2024-11-04 12:33:25.277098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.871 [2024-11-04 12:33:25.277105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.871 [2024-11-04 12:33:25.280583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.871 Malloc0 00:28:50.871 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.871 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:50.871 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.871 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.871 [2024-11-04 12:33:25.289820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.871 [2024-11-04 12:33:25.290318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.871 [2024-11-04 12:33:25.290334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.871 [2024-11-04 12:33:25.290341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.871 [2024-11-04 12:33:25.290557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.871 [2024-11-04 12:33:25.290777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.871 [2024-11-04 12:33:25.290785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.871 [2024-11-04 12:33:25.290793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.871 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.871 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:50.871 [2024-11-04 12:33:25.294268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.871 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.871 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.871 [2024-11-04 12:33:25.303551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.871 [2024-11-04 12:33:25.304190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.871 [2024-11-04 12:33:25.304228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x211d0c0 with addr=10.0.0.2, port=4420 00:28:50.871 [2024-11-04 12:33:25.304240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d0c0 is same with the state(6) to be set 00:28:50.871 [2024-11-04 12:33:25.304475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0c0 (9): Bad file descriptor 00:28:50.871 [2024-11-04 12:33:25.304698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.871 [2024-11-04 12:33:25.304707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.871 [2024-11-04 12:33:25.304715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
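The rpc_cmd calls threaded through the trace here (host/bdevperf.sh@17 through @21) are what stand the target up while the stale controller is still cycling. Issued directly against scripts/rpc.py (assuming the default RPC socket), the same sequence is:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # transport opts as traced (-o, -u 8192)
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB RAM disk, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the last call lands (the "Target Listening" notice just below), the reconnect loop finally succeeds.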
00:28:50.871 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.871 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:50.871 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.871 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.871 [2024-11-04 12:33:25.308206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.871 [2024-11-04 12:33:25.312979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.871 [2024-11-04 12:33:25.317445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.871 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.871 12:33:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1821246 00:28:51.132 [2024-11-04 12:33:25.483342] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:52.518 4648.14 IOPS, 18.16 MiB/s [2024-11-04T11:33:28.029Z] 5477.62 IOPS, 21.40 MiB/s [2024-11-04T11:33:28.971Z] 6159.33 IOPS, 24.06 MiB/s [2024-11-04T11:33:29.911Z] 6653.00 IOPS, 25.99 MiB/s [2024-11-04T11:33:30.853Z] 7064.82 IOPS, 27.60 MiB/s [2024-11-04T11:33:31.794Z] 7407.25 IOPS, 28.93 MiB/s [2024-11-04T11:33:32.737Z] 7695.46 IOPS, 30.06 MiB/s [2024-11-04T11:33:34.122Z] 7938.64 IOPS, 31.01 MiB/s 00:28:59.552 Latency(us) 00:28:59.552 [2024-11-04T11:33:34.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.552 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:59.553 Verification LBA range: start 0x0 length 0x4000 00:28:59.553 Nvme1n1 : 15.01 8150.95 31.84 10298.37 0.00 6912.78 535.89 15510.19 00:28:59.553 [2024-11-04T11:33:34.123Z] =================================================================================================================== 00:28:59.553 [2024-11-04T11:33:34.123Z] Total : 8150.95 31.84 10298.37 0.00 6912.78 535.89 15510.19 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
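A quick consistency check on the bdevperf summary above: at the configured IO size of 4096 B, 8150.95 IOPS works out to 8150.95 x 4096 / 2^20 = 31.84 MiB/s, matching the MiB/s column, and the per-second samples (4648 up to 7939 IOPS) show the ramp toward that 15.01 s average as the disconnect storm subsides.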
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:59.553 rmmod nvme_tcp 00:28:59.553 rmmod nvme_fabrics 00:28:59.553 rmmod nvme_keyring 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 1822404 ']' 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 1822404 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1822404 ']' 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1822404 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1822404 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1822404' 00:28:59.553 killing process with pid 1822404 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1822404 00:28:59.553 12:33:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1822404 00:28:59.553 12:33:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:59.553 12:33:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:59.553 12:33:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:59.553 12:33:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:59.553 12:33:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:59.553 12:33:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:28:59.553 12:33:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:28:59.553 12:33:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:59.553 12:33:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:59.553 12:33:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.553 12:33:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.553 12:33:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:02.100 00:29:02.100 real 0m27.779s 00:29:02.100 user 1m3.114s 00:29:02.100 sys 0m7.205s 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:02.100 ************************************ 
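The killprocess helper traced above reduces to a guarded kill-and-reap; a sketch of the same logic (the real helper lives in common/autotest_common.sh):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0       # nothing to do if it already exited
        if [ "$(uname)" = Linux ]; then
            # never kill the sudo wrapper itself, only the payload (reactor_1 here)
            [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                          # reap; wait only works for our own children
    }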
00:29:02.100 END TEST nvmf_bdevperf 00:29:02.100 ************************************ 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.100 ************************************ 00:29:02.100 START TEST nvmf_target_disconnect 00:29:02.100 ************************************ 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:02.100 * Looking for test storage... 00:29:02.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:02.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.100 --rc genhtml_branch_coverage=1 00:29:02.100 --rc genhtml_function_coverage=1 00:29:02.100 --rc genhtml_legend=1 00:29:02.100 --rc geninfo_all_blocks=1 00:29:02.100 --rc geninfo_unexecuted_blocks=1 00:29:02.100 00:29:02.100 ' 00:29:02.100 [... the same option block is printed three more times as LCOV_OPTS is assigned (common/autotest_common.sh@1704) and LCOV is exported and assigned with the lcov prefix (@1705) ...] 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
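The lt 1.15 2 comparison traced above is a plain field-wise version compare, splitting on dots and dashes and padding missing fields with zero. Condensed into one hypothetical function:

    version_lt() {    # usage: version_lt 1.15 2  ->  true (1.15 < 2)
        local IFS=.- i v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1      # equal is not less-than
    }

12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect --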
nvmf/common.sh@7 -- # uname -s 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.100 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=[... the same directory list as the paths/export.sh@2 value above, with /opt/go/1.21.1/bin rotated to the front ...] 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=[... same list, /opt/protoc/21.7/bin in front ...] 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo [... the @4 value, printed once more in full ...] 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:02.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect --
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:02.101 12:33:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:10.243 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:10.243 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:10.243 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:10.243 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
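Everything nvmf_tcp_init does next hinges on the two E810 ports discovered above: the harness moves the target-side port (cvl_0_0) into a private network namespace and leaves the initiator-side port (cvl_0_1) in the root namespace, so target and host traffic traverse the physical link between the two ports. A condensed sketch of the equivalent commands, assuming the cvl_0_0/cvl_0_1 names found above (the real script derives them from the PCI scan):

    # target port: isolate it in a namespace and address it as 10.0.0.2
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # initiator port: stays in the root namespace as 10.0.0.1
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    # allow NVMe/TCP traffic to the listener port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT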
00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:10.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:10.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:29:10.243 00:29:10.243 --- 10.0.0.2 ping statistics --- 00:29:10.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.243 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:29:10.243 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:10.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:10.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:29:10.243 00:29:10.243 --- 10.0.0.1 ping statistics --- 00:29:10.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.244 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:10.244 ************************************ 00:29:10.244 START TEST nvmf_target_disconnect_tc1 00:29:10.244 ************************************ 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:10.244 12:33:43 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:10.244 [2024-11-04 12:33:43.956254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.244 [2024-11-04 12:33:43.956314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c8ba0 with addr=10.0.0.2, port=4420 00:29:10.244 [2024-11-04 12:33:43.956346] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:10.244 [2024-11-04 12:33:43.956363] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:10.244 [2024-11-04 12:33:43.956370] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:10.244 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:10.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:10.244 Initializing NVMe Controllers 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:10.244 00:29:10.244 real 0m0.116s 00:29:10.244 user 0m0.045s 00:29:10.244 sys 0m0.071s 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:10.244 12:33:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:10.244 ************************************ 00:29:10.244 END TEST nvmf_target_disconnect_tc1 00:29:10.244 ************************************ 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 
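The tc1 case that just completed (END TEST above) is a negative test: host/target_disconnect.sh runs the reconnect example through the NOT wrapper before any subsystem is listening, so spdk_nvme_probe() must fail (connect() returns errno 111, ECONNREFUSED) and the wrapper turns the non-zero exit (es=1 above) into a pass. In outline, hedged to the arguments visible in the trace, with $reconnect standing in for the full build/examples/reconnect path:

    # tc1: passes only if the probe fails, since nothing listens on 10.0.0.2:4420 yet
    reconnect=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
    NOT "$reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'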
00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:10.244 ************************************ 00:29:10.244 START TEST nvmf_target_disconnect_tc2 00:29:10.244 ************************************ 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1828539 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1828539 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1828539 ']' 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:10.244 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.244 [2024-11-04 12:33:44.124798] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:29:10.244 [2024-11-04 12:33:44.124862] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.244 [2024-11-04 12:33:44.213394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:10.244 [2024-11-04 12:33:44.265437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.244 [2024-11-04 12:33:44.265494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
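tc2 starts a fresh target inside the cvl_0_0_ns_spdk namespace. The -m 0xF0 core mask is hexadecimal: 0xF0 = 0b11110000, so bits 4 through 7 are set and one reactor is pinned to each of cores 4, 5, 6 and 7, matching the four "Reactor started on core" lines reported just below. The launch reduces to:

    # -e 0xFFFF enables all tracepoint groups; -m 0xF0 selects cores 4-7
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0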
00:29:10.244 [2024-11-04 12:33:44.265504] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.244 [2024-11-04 12:33:44.265511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.244 [2024-11-04 12:33:44.265517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.244 [2024-11-04 12:33:44.267948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:10.244 [2024-11-04 12:33:44.268110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:10.244 [2024-11-04 12:33:44.268271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:10.244 [2024-11-04 12:33:44.268271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:10.505 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:10.506 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:10.506 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:10.506 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:10.506 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.506 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.506 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:10.506 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.506 12:33:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.506 Malloc0 00:29:10.506 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.506 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:10.506 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.506 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.506 [2024-11-04 12:33:45.031824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.506 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.506 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:10.506 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.506 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.506 12:33:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.506 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:10.506 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.506 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.506 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.506 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.506 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.506 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.506 [2024-11-04 12:33:45.072236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.766 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.766 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:10.766 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.766 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.766 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.766 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1828661 00:29:10.766 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:10.766 12:33:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:12.684 12:33:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1828539 00:29:12.684 12:33:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:12.684 Read completed with error (sct=0, sc=8) 00:29:12.684 starting I/O failed 00:29:12.684 Read completed with error (sct=0, sc=8) 00:29:12.684 starting I/O failed 00:29:12.684 Read completed with error (sct=0, sc=8) 00:29:12.684 starting I/O failed 00:29:12.684 Read completed with error (sct=0, sc=8) 00:29:12.684 starting I/O failed 00:29:12.684 Read completed with error (sct=0, sc=8) 00:29:12.684 starting I/O failed 00:29:12.684 Read completed with error (sct=0, sc=8) 00:29:12.684 starting I/O failed 00:29:12.684 Read completed with error 
(sct=0, sc=8) 00:29:12.684 starting I/O failed 00:29:12.684 Read completed with error (sct=0, sc=8) 00:29:12.684 starting I/O failed 00:29:12.684 Read completed with error (sct=0, sc=8) 00:29:12.684 starting I/O failed 00:29:12.684 Write completed with error (sct=0, sc=8) 00:29:12.684 starting I/O failed 00:29:12.684 Read completed with error (sct=0, sc=8) 00:29:12.684 starting I/O failed 00:29:12.684 Read completed with error (sct=0, sc=8) 00:29:12.684 starting I/O failed 00:29:12.684 Read completed with error (sct=0, sc=8) 00:29:12.684 starting I/O failed 00:29:12.684 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 [2024-11-04 12:33:47.105587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 
00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Write completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 Read completed with error (sct=0, sc=8) 00:29:12.685 starting I/O failed 00:29:12.685 [2024-11-04 12:33:47.105862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.685 [2024-11-04 12:33:47.106224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.685 [2024-11-04 12:33:47.106242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.685 qpair failed and we were unable to recover it. 00:29:12.685 [2024-11-04 12:33:47.106440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.685 [2024-11-04 12:33:47.106447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.685 qpair failed and we were unable to recover it. 00:29:12.685 [2024-11-04 12:33:47.106630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.685 [2024-11-04 12:33:47.106638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.685 qpair failed and we were unable to recover it. 00:29:12.685 [2024-11-04 12:33:47.106705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.685 [2024-11-04 12:33:47.106712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.685 qpair failed and we were unable to recover it. 00:29:12.685 [2024-11-04 12:33:47.107024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.685 [2024-11-04 12:33:47.107033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.685 qpair failed and we were unable to recover it. 
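This error burst is the point of tc2: with 32 I/Os queued per qpair (-q 32), the harness kill -9s the target (pid 1828539) mid-run, every outstanding command completes with an error, the active I/O qpairs report CQ transport error -6 (ids 3 and 2 above), and each subsequent reconnect attempt fails with errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 any more. The target that was just killed had been provisioned with this RPC sequence (rpc_cmd is the harness's RPC helper, shown verbatim earlier in the trace):

    # malloc-backed namespace exported over NVMe/TCP at 10.0.0.2:4420
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_transport -t tcp -o
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420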
00:29:12.685 [2024-11-04 12:33:47.107359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.685 [2024-11-04 12:33:47.107367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.685 qpair failed and we were unable to recover it.
(the same three-message sequence, connect() failed errno = 111 from posix.c:1055:posix_sock_create, sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 from nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it.", repeats with advancing timestamps for the remaining reconnect attempts in this window)
00:29:12.687 [2024-11-04 12:33:47.132619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.687 [2024-11-04 12:33:47.132627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.687 qpair failed and we were unable to recover it. 00:29:12.687 [2024-11-04 12:33:47.132932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.687 [2024-11-04 12:33:47.132941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.687 qpair failed and we were unable to recover it. 00:29:12.687 [2024-11-04 12:33:47.133231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.133238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.133593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.133600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.133904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.133912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.134208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.134216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.134513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.134520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.134849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.134857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.135159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.135166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.135448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.135455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 
00:29:12.688 [2024-11-04 12:33:47.135788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.135795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.136084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.136092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.136393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.136400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.136637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.136644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.136967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.136975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.137044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.137052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.137214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.137221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.137534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.137541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.137743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.137771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.138073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.138080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 
00:29:12.688 [2024-11-04 12:33:47.138367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.138374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.138666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.138673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.138845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.138852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.139014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.139021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.139313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.139319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.139506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.139513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.139810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.139818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.140138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.140145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.140492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.140499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.140783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.140790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 
00:29:12.688 [2024-11-04 12:33:47.141005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.141012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.141263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.141269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.141551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.141558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.141874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.141881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.142169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.142177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.142480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.142487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.142849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.142856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.143087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.143093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.143364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.143371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-04 12:33:47.143695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-04 12:33:47.143702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 
00:29:12.688 [2024-11-04 12:33:47.144045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.144052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.144358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.144365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.144650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.144657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.145029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.145037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.145213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.145220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.145482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.145489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.145802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.145809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.146176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.146184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.146467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.146474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.146771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.146779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 
00:29:12.689 [2024-11-04 12:33:47.147132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.147138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.147435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.147443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.147768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.147776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.147965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.147973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.148276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.148283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.148594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.148601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.148902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.148910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.149223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.149230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.149521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.149530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.149684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.149691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 
00:29:12.689 [2024-11-04 12:33:47.150055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.150062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.150231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.150238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.150532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.150547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.150856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.150864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.151207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.151215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.151503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.151510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.151807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.151814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.152124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.152131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.152422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.152429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.152728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.152735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 
00:29:12.689 [2024-11-04 12:33:47.153029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.689 [2024-11-04 12:33:47.153036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.689 qpair failed and we were unable to recover it. 00:29:12.689 [2024-11-04 12:33:47.153212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.153219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.153521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.153528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.153705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.153712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.154028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.154036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.154347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.154353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.154687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.154695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.154997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.155004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.155328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.155335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.155646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.155654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 
00:29:12.690 [2024-11-04 12:33:47.155811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.155819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.156003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.156011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.156317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.156325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.156611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.156619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.156924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.156931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.157220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.157228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.157525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.157532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.157815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.157823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.158146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.158153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.158520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.158527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 
00:29:12.690 [2024-11-04 12:33:47.158816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.158824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.159139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.159146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.159435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.159442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.159764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.159771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.160075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.160082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.160414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.160422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.160754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.160762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.160943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.160950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.161278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.161286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.161670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.161677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 
00:29:12.690 [2024-11-04 12:33:47.161984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.161992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.162286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.162293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.162662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.162669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.162975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.162982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.163143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.163151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.163460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.163468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.163779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.163787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.164003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.164010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.164331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.164338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 00:29:12.690 [2024-11-04 12:33:47.164638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.690 [2024-11-04 12:33:47.164644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.690 qpair failed and we were unable to recover it. 
00:29:12.691 [2024-11-04 12:33:47.164964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.164972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.165276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.165284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.165452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.165460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.165771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.165779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.166150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.166156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.166322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.166329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.166534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.166541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.166876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.166883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.167203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.167209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.167506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.167513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 
00:29:12.691 [2024-11-04 12:33:47.167803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.167811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.168126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.168133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.168428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.168435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.168736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.168742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.169019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.169026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.169368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.169376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.169703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.169710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.170012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.170021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.170318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.170326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-04 12:33:47.170666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-04 12:33:47.170674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 
00:29:12.691 [2024-11-04 12:33:47.171044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.691 [2024-11-04 12:33:47.171052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.691 qpair failed and we were unable to recover it.
00:29:12.697 [2024-11-04 12:33:47.171350 .. 12:33:47.233562] the same three-line sequence repeats continuously: posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:29:12.697 [2024-11-04 12:33:47.233872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.233880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.234201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.234208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.234498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.234512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.234816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.234824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.235128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.235135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.235461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.235468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.235774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.235782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.236097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.236104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.236398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.236405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.236728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.236735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 
00:29:12.697 [2024-11-04 12:33:47.237097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.237104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.237296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.237303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.237631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.237638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.238011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.238018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.238323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.238330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.238634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.238643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.238819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.238828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.239041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.239048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.239378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.239384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.239705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.239712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 
00:29:12.697 [2024-11-04 12:33:47.239901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.239909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.240230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.240237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.240432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.240439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.240760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.240768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.241071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.241078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.241360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.241367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.241664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.241670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.241980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.241988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.242300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.242307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.242599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.242607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 
00:29:12.697 [2024-11-04 12:33:47.242915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.242923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.243212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.243219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.243520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-04 12:33:47.243527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-04 12:33:47.243798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-04 12:33:47.243806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-04 12:33:47.244108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-04 12:33:47.244115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-04 12:33:47.244399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-04 12:33:47.244414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-04 12:33:47.244754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-04 12:33:47.244762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-04 12:33:47.245071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-04 12:33:47.245078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.245377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.245387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.246034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.246050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 
00:29:12.973 [2024-11-04 12:33:47.246330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.246338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.246645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.246653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.246987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.246996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.247200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.247207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.247476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.247483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.247817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.247824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.248120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.248127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.248452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.248459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.248840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.248848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.249159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.249166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 
00:29:12.973 [2024-11-04 12:33:47.249543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.249549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.249858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.249866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.250178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.250185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.250484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.250491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.250775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.250783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.250964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.250971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.251292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.251299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.251564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.251571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.251899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.251907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.252227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.252235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 
00:29:12.973 [2024-11-04 12:33:47.252397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.252405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.252679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.252687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.252889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.252897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.253183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.253190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.253512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.253519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.253682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-11-04 12:33:47.253690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-11-04 12:33:47.253990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.253999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.254216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.254223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.254504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.254512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.254834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.254842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 
00:29:12.974 [2024-11-04 12:33:47.255144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.255151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.255320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.255327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.255524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.255531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.255714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.255720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.256087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.256095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.256427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.256434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.256740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.256751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.257053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.257060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.257356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.257365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.257560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.257567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 
00:29:12.974 [2024-11-04 12:33:47.257867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.257875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.258224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.258231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.258394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.258401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.258714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.258721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.259020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.259028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.259356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.259362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.259757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.259764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.260039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.260046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.260363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.260369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.260655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.260662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 
00:29:12.974 [2024-11-04 12:33:47.260961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.260968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.261274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.261281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.261570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.261577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.261890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.261897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.262125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.262132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.262320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.262328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.262653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.262661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.262841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.262850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.263141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.263148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.263335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.263343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 
00:29:12.974 [2024-11-04 12:33:47.263686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.263694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.263985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.263993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-11-04 12:33:47.264289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-11-04 12:33:47.264297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.264482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.264489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.264788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.264796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.265124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.265131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.265419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.265426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.265785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.265792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.266125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.266133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.266484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.266490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 
00:29:12.975 [2024-11-04 12:33:47.266786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.266794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.267119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.267126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.267434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.267441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.267807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.267815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.268103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.268110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.268440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.268446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.268726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.268734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.269036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.269044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.269345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.269354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.269537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.269544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 
00:29:12.975 [2024-11-04 12:33:47.269827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.269835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.270238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.270245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.270531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.270546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.270852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.270859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.271144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.271152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.271444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.271452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.271772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.271780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.272074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.272081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.272370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.272378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.272485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.272491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 
00:29:12.975 [2024-11-04 12:33:47.272805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.272813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.273188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.273195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.273587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.273595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.273915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.273922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.274205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.274212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.274503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.274510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.274819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.274827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.275154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.275161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.275450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.275458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-11-04 12:33:47.275652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-11-04 12:33:47.275659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 
00:29:12.981 [2024-11-04 12:33:47.333989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.333996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.334161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.334168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.334446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.334453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.334604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.334612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.334913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.334920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.335223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.335230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.335397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.335404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.335693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.335699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.335992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.336000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.336260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.336267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 
00:29:12.981 [2024-11-04 12:33:47.336555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.336562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.336873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.336880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.337055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.337063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.337380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.337387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.337707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.337714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.338014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.338023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.338326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.338333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.338643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.338651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.338960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.338968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.339252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.339266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 
00:29:12.981 [2024-11-04 12:33:47.339598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.339606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.339768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.339776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.340111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.340118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.340404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.340411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.340723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.340729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.341062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.341070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.341374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.341381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.341691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.341699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.341897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.341905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.342234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.342241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 
00:29:12.981 [2024-11-04 12:33:47.342410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.342417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.981 [2024-11-04 12:33:47.342731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.981 [2024-11-04 12:33:47.342737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.981 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.343063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.343071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.343379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.343386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.343675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.343682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.344002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.344009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.344291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.344298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.344586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.344593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.344845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.344852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.345166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.345172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 
00:29:12.982 [2024-11-04 12:33:47.345469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.345476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.345792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.345799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.346078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.346085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.346386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.346394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.346705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.346712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.347006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.347014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.347314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.347322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.347604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.347612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.347914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.347921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.348204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.348211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 
00:29:12.982 [2024-11-04 12:33:47.348518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.348525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.348833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.348840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.349175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.349182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.349474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.349481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.349800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.349808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.350116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.350125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.350464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.350471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.350779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.350787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.351123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.351130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.351434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.351441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 
00:29:12.982 [2024-11-04 12:33:47.351749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.351757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.352044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.352051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.352360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.352368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.352678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-11-04 12:33:47.352686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-11-04 12:33:47.352978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.352985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.353297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.353304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.353609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.353617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.353906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.353914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.354220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.354228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.354530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.354538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 
00:29:12.983 [2024-11-04 12:33:47.354827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.354835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.355144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.355150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.355436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.355450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.355763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.355770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.356081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.356088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.356402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.356410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.356696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.356703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.357021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.357029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.357336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.357344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.357695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.357702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 
00:29:12.983 [2024-11-04 12:33:47.357901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.357909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.358191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.358198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.358489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.358497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.358768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.358776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.359005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.359013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.359314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.359322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.359656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.359663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.359844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.359852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.360196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.360204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.360506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.360514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 
00:29:12.983 [2024-11-04 12:33:47.360829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.360836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.361161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.361168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.361479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.361486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.361797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.361804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.362171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.362179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.362484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.362492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.362801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.362809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.363174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.363181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.363494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.363502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.363821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.363828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 
00:29:12.983 [2024-11-04 12:33:47.364144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.364152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-11-04 12:33:47.364455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-11-04 12:33:47.364463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.364778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.364785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.364957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.364965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.365300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.365308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.365612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.365618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.365910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.365917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.366240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.366248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.366526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.366534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.366821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.366829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 
00:29:12.984 [2024-11-04 12:33:47.367137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.367144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.367460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.367468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.367787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.367795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.368108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.368115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.368447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.368456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.368808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.368816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.369152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.369159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.369471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.369478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.369789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.369797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.370112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.370119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 
00:29:12.984 [2024-11-04 12:33:47.370409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.370417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.370754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.370766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.371064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.371071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.371380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.371387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.371712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.371720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.372039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.372047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.372358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.372367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.372655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.372664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.372971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.372979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.373285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.373293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 
00:29:12.984 [2024-11-04 12:33:47.373516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.373524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.373833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.373841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.374160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.374168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.374461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.374468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.374783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.374791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.375095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.375104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.375832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.375847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.376157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.376165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.376472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.376480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-11-04 12:33:47.376782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-11-04 12:33:47.376790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 
00:29:12.984 [2024-11-04 12:33:47.377118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.985 [2024-11-04 12:33:47.377125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.985 qpair failed and we were unable to recover it.
00:29:12.985 [... the same three-message sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats 141 more times between 12:33:47.377417 and 12:33:47.420388; repetitions elided ...]
00:29:12.988 [2024-11-04 12:33:47.420482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1460ed0 is same with the state(6) to be set
00:29:12.988 Read completed with error (sct=0, sc=8) 00:29:12.988 starting I/O failed 00:29:12.988 [... 31 more Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed" — 32 outstanding commands in total; repetitions elided ...]
00:29:12.989 [2024-11-04 12:33:47.421362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.989 [... 32 further Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed"; repetitions elided ...]
00:29:12.989 [2024-11-04 12:33:47.422351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
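A note for triage (editorial, not part of the test output): in the completion records above, sct and sc are the NVMe Status Code Type and Status Code. Per the NVMe base specification, sct=0 selects the Generic Command Status set, in which sc=0x08 is "Command Aborted due to SQ Deletion" — the queued reads and writes were aborted because their submission queue went away with the dead qpair, not because the media failed. Below is a minimal decoder sketch for the values seen here; it is illustrative C written against the public specification, not SPDK code (SPDK's canonical definitions live in its spdk/nvme_spec.h header).

```c
/* Illustrative decoder for the completion status logged above (sct=0, sc=8).
 * Values follow the NVMe base specification's Generic Command Status set;
 * this is a triage aid, not part of the SPDK test code. */
#include <stdio.h>

static const char *decode_nvme_status(unsigned sct, unsigned sc)
{
    if (sct == 0) { /* Status Code Type 0: Generic Command Status */
        switch (sc) {
        case 0x00: return "Successful Completion";
        case 0x04: return "Data Transfer Error";
        case 0x07: return "Command Abort Requested";
        case 0x08: return "Command Aborted due to SQ Deletion";
        }
    }
    return "other (consult the NVMe base specification)";
}

int main(void)
{
    /* Every failed I/O above carries sct=0, sc=8: aborted with its queue. */
    printf("sct=0, sc=8 -> %s\n", decode_nvme_status(0, 0x08));
    return 0;
}
```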
00:29:12.989 [2024-11-04 12:33:47.422684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-11-04 12:33:47.422703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it.
00:29:12.990 [... the same three-message sequence for tqpair=0x146a180 repeats 50 more times between 12:33:47.423023 and 12:33:47.437976; repetitions elided ...]
00:29:12.990 [2024-11-04 12:33:47.438191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-11-04 12:33:47.438201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-11-04 12:33:47.438502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-11-04 12:33:47.438513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-11-04 12:33:47.438704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-11-04 12:33:47.438715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-11-04 12:33:47.439065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-11-04 12:33:47.439077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-11-04 12:33:47.439430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-11-04 12:33:47.439442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-11-04 12:33:47.439651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-11-04 12:33:47.439662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-11-04 12:33:47.439984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-11-04 12:33:47.440013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-11-04 12:33:47.440342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.440352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.440522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.440537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.440744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.440764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 
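For reference, errno = 111 on Linux is ECONNREFUSED: the target side actively refused the TCP connection, typically because nothing is listening on 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port). Note also that the tqpair pointer changes at this point, from 0x146a180 to 0x7f7e38000b90, so the attempts that follow run against a different qpair object. A minimal standalone sketch (not SPDK code) that reproduces the same errno against a reachable host with no listener on the port:

    /* Standalone sketch, not SPDK code: reproduce the errno seen above by
     * connecting to an address/port where nothing is listening. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(4420),        /* NVMe/TCP port from the log */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* Against a reachable host with no listener, this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }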
00:29:12.991 [2024-11-04 12:33:47.441207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.441235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.441538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.441547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.441949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.441966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.442292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.442305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.442671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.442684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.443010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.443024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.443376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.443389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.443693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.443706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.444007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.444021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.444411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.444425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 
00:29:12.991 [2024-11-04 12:33:47.444719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.444727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.445045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.445053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.445360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.445367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.445547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.445559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.445854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.445866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.446074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.446088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.446296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.446309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.446548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.446557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.446893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.446900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.447203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.447217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 
00:29:12.991 [2024-11-04 12:33:47.447393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.447402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.447581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.447588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.447891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.447898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.448203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.448210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.448448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.448455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.448788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.448796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.449109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.449116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.449432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.449439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.449766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.449774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.450057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.450064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 
00:29:12.991 [2024-11-04 12:33:47.450368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-11-04 12:33:47.450375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-11-04 12:33:47.450687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.450693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.451000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.451008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.451329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.451335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.451655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.451662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.451860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.451867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.452129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.452136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.452467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.452473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.452827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.452835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.453020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.453027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 
00:29:12.992 [2024-11-04 12:33:47.453253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.453260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.453557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.453565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.453873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.453880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.454211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.454218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.454538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.454545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.454778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.454785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.455073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.455080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.455402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.455409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.455719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.455726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.456056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.456064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 
00:29:12.992 [2024-11-04 12:33:47.456368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.456375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.456683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.456690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.457014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.457022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.457328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.457335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.457652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.457660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.457958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.457965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.458271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.458278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.458580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.458588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.458802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.458810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.459125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.459134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 
00:29:12.992 [2024-11-04 12:33:47.459415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.459422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.459731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.459738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.460050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.460058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.460408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.460415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.460720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.460727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.461035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.461042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.461350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.461357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.461691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.461698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-11-04 12:33:47.462003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-11-04 12:33:47.462011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.462318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.462325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 
00:29:12.993 [2024-11-04 12:33:47.462614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.462621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.462913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.462920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.463237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.463244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.463422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.463429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.463764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.463771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.464090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.464098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.464401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.464408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.464624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.464631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.464914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.464922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.465228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.465234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 
00:29:12.993 [2024-11-04 12:33:47.465551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.465558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.465881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.465889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.466265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.466272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.466587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.466595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.466793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.466800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.467079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.467086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.467444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.467451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.467765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.467772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.468046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.468053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.468360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.468367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 
00:29:12.993 [2024-11-04 12:33:47.468674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.468681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.468987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.468994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.469303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.469310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.469613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.469620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.469938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.469945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.470256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.470264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.470571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.470579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.470762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.470770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.471041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.471048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.471332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.471340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 
00:29:12.993 [2024-11-04 12:33:47.471646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.471653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.471950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.471958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.472304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.472311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.472690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.472697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.472967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.472974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.473298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.993 [2024-11-04 12:33:47.473305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.993 qpair failed and we were unable to recover it. 00:29:12.993 [2024-11-04 12:33:47.473604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.473611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.473918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.473925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.474236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.474243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.474537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.474545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 
00:29:12.994 [2024-11-04 12:33:47.474854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.474861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.475219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.475226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.475511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.475519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.475838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.475845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.476158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.476165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.476468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.476475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.476781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.476789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.477119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.477127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.477428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.477435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.477741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.477750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 
00:29:12.994 [2024-11-04 12:33:47.478040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.478048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.478376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.478383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.478688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.478695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.479005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.479012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.479310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.479316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.479624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.479631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.479914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.479922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.480220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.480228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.480417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.480426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.480720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.480727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 
00:29:12.994 [2024-11-04 12:33:47.480922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.480929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.481232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.481239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.481495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.481502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.481837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.481844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.482140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.482147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.482466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.482473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.482756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.482763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.483062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.483071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.483388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.483396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 00:29:12.994 [2024-11-04 12:33:47.483698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.994 [2024-11-04 12:33:47.483707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:12.994 qpair failed and we were unable to recover it. 
00:29:12.994 [2024-11-04 12:33:47.484026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.994 [2024-11-04 12:33:47.484034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.994 qpair failed and we were unable to recover it.
00:29:12.994 [2024-11-04 12:33:47.484344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.994 [2024-11-04 12:33:47.484351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.994 qpair failed and we were unable to recover it.
00:29:12.994 [2024-11-04 12:33:47.484665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.994 [2024-11-04 12:33:47.484672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.994 qpair failed and we were unable to recover it.
00:29:12.994 [2024-11-04 12:33:47.484842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.994 [2024-11-04 12:33:47.484851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.994 qpair failed and we were unable to recover it.
00:29:12.994 [2024-11-04 12:33:47.485178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.485187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.485513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.485520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.485823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.485830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.486161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.486168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.486498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.486505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.486789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.486797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.487012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.487019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.487367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.487373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.487560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.487568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.487851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.487858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.488229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.488236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.488544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.488551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.488906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.488913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.489208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.489215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.489508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.489515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.489834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.489841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.490042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.490048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.490316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.490323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.490589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.490596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.490768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.490776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.491162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.491168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.491526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.491533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.491827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.491835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.492045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.492052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.492269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.492276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.492520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.492528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.492693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.492700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.492973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.492980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.493170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.493177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.493500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.493507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.493705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.493713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.494072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.494080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.494385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.494392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.494709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.494723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.494951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.494965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.495303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.495315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.495618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.995 [2024-11-04 12:33:47.495626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.995 qpair failed and we were unable to recover it.
00:29:12.995 [2024-11-04 12:33:47.495813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.495821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.496117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.496126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.496387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.496395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.496694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.496707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.497039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.497053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.497388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.497398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.497581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.497590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.497906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.497914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.498214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.498222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.498531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.498543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.498873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.498883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.499212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.499220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.499535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.499543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.499754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.499762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.500029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.500042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.500368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.500378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.500684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.500692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.500952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.500960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.501268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.501276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.501622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.501635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.501818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.501832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.502042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.996 [2024-11-04 12:33:47.502054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.996 qpair failed and we were unable to recover it.
00:29:12.996 [2024-11-04 12:33:47.502386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.502396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.502630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.502637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.502943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.502951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.503252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.503260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.503443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.503450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.503825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.503838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.504204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.504214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.504385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.504393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.504687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.504695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.505004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.505012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.505320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.505331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.505637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.505649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.506037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.506051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.506341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.506355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.506559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.506572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.506856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.506864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.507194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.507206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.507527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.507534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.507812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.507819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.508016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.508023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.508298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.508305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.508631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.508639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.508941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.508949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.509248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.509255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.509555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.509561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.509870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.509877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.510183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.510190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.510496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.510504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.510815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.510823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.511140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.511147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.997 qpair failed and we were unable to recover it.
00:29:12.997 [2024-11-04 12:33:47.511456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.997 [2024-11-04 12:33:47.511463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.511688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.511695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.512093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.512100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.512409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.512417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.512712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.512720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.513046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.513054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.513323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.513331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.513648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.513656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.513968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.513976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.514259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.514266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.514609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.514617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.514832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.514840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.515146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.515153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.515432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.515439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.515758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.515765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.516086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.516094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.516402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.516409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.516708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.516716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.517044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.517051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.517399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.517406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.517716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.517723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.517944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.517951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.518282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.518289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.518512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.518520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.518830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.518837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.519126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.519133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.519327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.519338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.519528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.519535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.519845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.519852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.520190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.520197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.520516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.520524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.520836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.520843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-04 12:33:47.521018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.998 [2024-11-04 12:33:47.521026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.998 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.521257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.521265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.521584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.521591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.521853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.521860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.522159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.522166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.522339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.522347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.522649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.522657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.522840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.522848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.523059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.523066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.523454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.523462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.523740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.523751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.524063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.524071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.524401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.524408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.524578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.524586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.524775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.524783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.525072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.525079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.525380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.525387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.525694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.525701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.526013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.526021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.526329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.526337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:12.999 [2024-11-04 12:33:47.526646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.999 [2024-11-04 12:33:47.526653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:12.999 qpair failed and we were unable to recover it.
00:29:13.273 [2024-11-04 12:33:47.526942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.273 [2024-11-04 12:33:47.526952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.273 qpair failed and we were unable to recover it.
00:29:13.273 [2024-11-04 12:33:47.527255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.273 [2024-11-04 12:33:47.527264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.273 qpair failed and we were unable to recover it.
00:29:13.273 [2024-11-04 12:33:47.527570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.273 [2024-11-04 12:33:47.527580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.273 qpair failed and we were unable to recover it.
00:29:13.273 [2024-11-04 12:33:47.527808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.273 [2024-11-04 12:33:47.527815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.273 qpair failed and we were unable to recover it.
00:29:13.273 [2024-11-04 12:33:47.528161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.273 [2024-11-04 12:33:47.528168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.273 qpair failed and we were unable to recover it.
00:29:13.273 [2024-11-04 12:33:47.528372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.273 [2024-11-04 12:33:47.528379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.273 qpair failed and we were unable to recover it.
00:29:13.273 [2024-11-04 12:33:47.528723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.273 [2024-11-04 12:33:47.528731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.273 qpair failed and we were unable to recover it.
00:29:13.273 [2024-11-04 12:33:47.529254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.273 [2024-11-04 12:33:47.529270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.273 qpair failed and we were unable to recover it.
00:29:13.273 [2024-11-04 12:33:47.529584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.273 [2024-11-04 12:33:47.529591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.273 qpair failed and we were unable to recover it.
00:29:13.273 [2024-11-04 12:33:47.529803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.273 [2024-11-04 12:33:47.529810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.273 qpair failed and we were unable to recover it.
00:29:13.273 [2024-11-04 12:33:47.530188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.273 [2024-11-04 12:33:47.530195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.273 qpair failed and we were unable to recover it.
00:29:13.273 [2024-11-04 12:33:47.530482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.273 [2024-11-04 12:33:47.530489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.273 qpair failed and we were unable to recover it.
00:29:13.273 [2024-11-04 12:33:47.530799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.273 [2024-11-04 12:33:47.530807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.273 qpair failed and we were unable to recover it.
00:29:13.273 [2024-11-04 12:33:47.531181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.273 [2024-11-04 12:33:47.531188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.273 qpair failed and we were unable to recover it.
00:29:13.273 [2024-11-04 12:33:47.531501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.274 [2024-11-04 12:33:47.531508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.274 qpair failed and we were unable to recover it.
00:29:13.274 [2024-11-04 12:33:47.531818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.274 [2024-11-04 12:33:47.531825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.274 qpair failed and we were unable to recover it.
00:29:13.274 [2024-11-04 12:33:47.532021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.274 [2024-11-04 12:33:47.532028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.274 qpair failed and we were unable to recover it.
00:29:13.274 [2024-11-04 12:33:47.532274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.274 [2024-11-04 12:33:47.532281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.274 qpair failed and we were unable to recover it.
00:29:13.274 [2024-11-04 12:33:47.532574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.274 [2024-11-04 12:33:47.532582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.274 qpair failed and we were unable to recover it.
00:29:13.274 [2024-11-04 12:33:47.532894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.274 [2024-11-04 12:33:47.532902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.274 qpair failed and we were unable to recover it.
00:29:13.274 [2024-11-04 12:33:47.533269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.274 [2024-11-04 12:33:47.533277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.274 qpair failed and we were unable to recover it.
00:29:13.274 [2024-11-04 12:33:47.533580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.274 [2024-11-04 12:33:47.533586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.274 qpair failed and we were unable to recover it.
00:29:13.274 [2024-11-04 12:33:47.533906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.274 [2024-11-04 12:33:47.533914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.274 qpair failed and we were unable to recover it.
00:29:13.274 [2024-11-04 12:33:47.534241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.274 [2024-11-04 12:33:47.534247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.274 qpair failed and we were unable to recover it.
00:29:13.274 [2024-11-04 12:33:47.534544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.534551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.534756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.534764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.534981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.534987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.535293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.535300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.535620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.535627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.535838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.535845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.536176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.536182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.536488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.536495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.536799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.536806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.537180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.537187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 
00:29:13.274 [2024-11-04 12:33:47.537484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.537491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.537647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.537654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.537937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.537944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.538262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.538269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.538624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.538631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.538921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.538929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.539133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.539142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.539448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.539455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.539789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.539796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.540020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.540026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 
00:29:13.274 [2024-11-04 12:33:47.540297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.540304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.540595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.540602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.540780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.540788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.541082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.541090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.541295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.541303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.541612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.541619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.541915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.541924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.541973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-11-04 12:33:47.541980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-11-04 12:33:47.542222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.542230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.542532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.542540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 
00:29:13.275 [2024-11-04 12:33:47.542844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.542853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.543133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.543140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.543466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.543474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.543780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.543788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.544089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.544097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.544391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.544399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.544706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.544714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.545014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.545022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.545317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.545325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.545630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.545638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 
00:29:13.275 [2024-11-04 12:33:47.545938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.545946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.546259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.546267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.546568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.546576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.546858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.546867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.547169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.547177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.547431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.547440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.547792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.547800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.547982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.547991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.548090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.548097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.548380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.548388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 
00:29:13.275 [2024-11-04 12:33:47.548715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.548723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.548898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.548905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.549194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.549201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.549497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.549505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.549786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.549794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.550097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.550105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.550392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.550402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.550708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.550716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.551083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.551092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.551383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.551391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 
00:29:13.275 [2024-11-04 12:33:47.551698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.551705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.552048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.552057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.552345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.552353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.552658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.552666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.275 [2024-11-04 12:33:47.552967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.275 [2024-11-04 12:33:47.552976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.275 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.553316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.553324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.553510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.553518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.553700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.553707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.554002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.554010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.554364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.554372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 
00:29:13.276 [2024-11-04 12:33:47.554704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.554712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.555021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.555029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.555228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.555236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.555548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.555556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.555724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.555732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.556047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.556055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.556361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.556370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.556696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.556704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.557013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.557021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.557328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.557336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 
00:29:13.276 [2024-11-04 12:33:47.557627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.557635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.557912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.557920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.558235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.558242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.558413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.558421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.558728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.558736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.559069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.559078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.559341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.559349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.559661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.559669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.559993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.560001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.560288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.560294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 
00:29:13.276 [2024-11-04 12:33:47.560617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.560624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.560914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.560921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.561152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.561158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.561451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.561458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.561785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.561792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.562089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.562096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.562378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.562386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.562692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.562699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.562992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.563000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.563303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.563310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 
00:29:13.276 [2024-11-04 12:33:47.563622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.563629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.563943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.276 [2024-11-04 12:33:47.563950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.276 qpair failed and we were unable to recover it. 00:29:13.276 [2024-11-04 12:33:47.564259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.564266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.564576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.564583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.564867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.564875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.565183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.565190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.565499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.565506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.565831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.565837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.566145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.566152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.566464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.566472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 
00:29:13.277 [2024-11-04 12:33:47.566772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.566780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.567096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.567103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.567412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.567420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.567699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.567706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.567984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.567992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.568302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.568308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.568617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.568624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.568927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.568934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.569226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.569234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.569543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.569550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 
00:29:13.277 [2024-11-04 12:33:47.569737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.569744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.570006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.570014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.570328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.570342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.570661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.570669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.570959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.570967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.571291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.571299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.571605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.571612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.571897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.571912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.572210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.572218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.572545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.572553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 
00:29:13.277 [2024-11-04 12:33:47.572865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.572873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.573205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.573212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.573519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.573527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.574009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.574022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.574178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.574186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.574383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.574391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.574571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.574581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.574836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.574843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.575150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.277 [2024-11-04 12:33:47.575158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.277 qpair failed and we were unable to recover it. 00:29:13.277 [2024-11-04 12:33:47.575421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.278 [2024-11-04 12:33:47.575428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.278 qpair failed and we were unable to recover it. 
00:29:13.278 [2024-11-04 12:33:47.575727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.278 [2024-11-04 12:33:47.575735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.278 qpair failed and we were unable to recover it. 00:29:13.278 [2024-11-04 12:33:47.576523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.278 [2024-11-04 12:33:47.576539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.278 qpair failed and we were unable to recover it. 00:29:13.278 [2024-11-04 12:33:47.576719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.278 [2024-11-04 12:33:47.576727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.278 qpair failed and we were unable to recover it. 00:29:13.278 [2024-11-04 12:33:47.577491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.278 [2024-11-04 12:33:47.577505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.278 qpair failed and we were unable to recover it. 00:29:13.278 [2024-11-04 12:33:47.577778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.278 [2024-11-04 12:33:47.577787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.278 qpair failed and we were unable to recover it. 00:29:13.278 [2024-11-04 12:33:47.578659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.278 [2024-11-04 12:33:47.578676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.278 qpair failed and we were unable to recover it. 00:29:13.278 [2024-11-04 12:33:47.579007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.278 [2024-11-04 12:33:47.579016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.278 qpair failed and we were unable to recover it. 00:29:13.278 [2024-11-04 12:33:47.579701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.278 [2024-11-04 12:33:47.579715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.278 qpair failed and we were unable to recover it. 00:29:13.278 [2024-11-04 12:33:47.580049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.278 [2024-11-04 12:33:47.580057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.278 qpair failed and we were unable to recover it. 00:29:13.278 [2024-11-04 12:33:47.580372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.278 [2024-11-04 12:33:47.580379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.278 qpair failed and we were unable to recover it. 
00:29:13.278 [2024-11-04 12:33:47.580724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.278 [2024-11-04 12:33:47.580731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.278 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously with only the timestamps advancing, from 12:33:47.580724 through 12:33:47.643471 ...]
00:29:13.284 [2024-11-04 12:33:47.643464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.284 [2024-11-04 12:33:47.643471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.284 qpair failed and we were unable to recover it.
00:29:13.284 [2024-11-04 12:33:47.643778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.643785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.644115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.644122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.644438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.644446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.644643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.644651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.644932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.644940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.645249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.645264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.645574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.645581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.645889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.645897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.646216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.646222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.646410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.646417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 
00:29:13.284 [2024-11-04 12:33:47.646743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.646756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.647065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.647073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.647381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.647388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.647704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.647711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.648032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.648040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.648351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.648358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.648667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.648675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.649029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.649036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.649349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.649356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.649670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.649678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 
00:29:13.284 [2024-11-04 12:33:47.649990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.649998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.650304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.650311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.650615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.650623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.650949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.650957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.651273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.651281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.651589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.651597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.651924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.651932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.652242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.652250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.652555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.652563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-11-04 12:33:47.652861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-11-04 12:33:47.652869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 
00:29:13.285 [2024-11-04 12:33:47.653175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.653181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.653496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.653504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.653803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.653810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.654031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.654038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.654355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.654362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.654567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.654574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.654742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.654752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.654954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.654961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.655248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.655255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.655577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.655584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 
00:29:13.285 [2024-11-04 12:33:47.655894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.655901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.656224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.656232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.656540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.656547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.656856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.656863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.657187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.657195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.657521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.657528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.657802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.657811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.658136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.658143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.658425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.658432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.658741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.658751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 
00:29:13.285 [2024-11-04 12:33:47.659031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.659037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.659350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.659356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.659666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.659674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.659982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.659990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.660296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.660304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.660607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.660614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.660923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.660931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.661240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.661247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.661551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.661559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.661860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.661867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 
00:29:13.285 [2024-11-04 12:33:47.662200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.662207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.662514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.662521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.662832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.662839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.663141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.663147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.663456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.663463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.663737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.663744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.663917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.663925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.664281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-11-04 12:33:47.664290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-11-04 12:33:47.664615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.664624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.664915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.664923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 
00:29:13.286 [2024-11-04 12:33:47.665243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.665250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.665539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.665547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.665865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.665872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.666116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.666123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.666430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.666438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.666747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.666755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.667067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.667073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.667363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.667370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.667678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.667685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.667993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.668001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 
00:29:13.286 [2024-11-04 12:33:47.668291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.668300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.668605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.668613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.668918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.668926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.669101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.669109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.669413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.669420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.669704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.669711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.670021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.670028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.670333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.670340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.670524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.670531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.670824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.670832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 
00:29:13.286 [2024-11-04 12:33:47.671168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.671175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.671493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.671500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.671698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.671706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.672013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.672020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.672290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.672297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.672624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.672631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.672943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.672952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.673283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.673290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.673579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.673586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.673904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.673911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 
00:29:13.286 [2024-11-04 12:33:47.674193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.674200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.674515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.674522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.674809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.674817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.675141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.675148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.675324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.675331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.675685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-11-04 12:33:47.675692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-11-04 12:33:47.675992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.675999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.676309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.676316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.676608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.676615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.676908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.676916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 
00:29:13.287 [2024-11-04 12:33:47.677238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.677246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.677559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.677567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.677867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.677874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.678244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.678250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.678583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.678590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.678914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.678921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.679217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.679224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.679536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.679542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.679850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.679858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.680163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.680170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 
00:29:13.287 [2024-11-04 12:33:47.680482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.680489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.680823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.680830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.681136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.681151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.681485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.681492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.681799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.681806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.682150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.682157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.682441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.682448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.682756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.682764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.683085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.683093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-11-04 12:33:47.683398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-11-04 12:33:47.683405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 
00:29:13.287 [2024-11-04 12:33:47.683705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.287 [2024-11-04 12:33:47.683712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.287 qpair failed and we were unable to recover it.
00:29:13.287 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt through 2024-11-04 12:33:47.746864 ...]
00:29:13.294 [2024-11-04 12:33:47.747176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.747183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.747470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.747478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.747782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.747789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.748092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.748100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.748288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.748294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.748582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.748588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.748786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.748793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.749105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.749112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.749428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.749435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.749762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.749770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 
00:29:13.294 [2024-11-04 12:33:47.750152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.750160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.750462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.750468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.750776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.750783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.751096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.751103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.751385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.751391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.751712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.751720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.752028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.752036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.752352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.752359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.752648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.752661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.752959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.752966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 
00:29:13.294 [2024-11-04 12:33:47.753271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.753278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.753494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.753501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.753789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.753796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.754098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.754106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.754415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.754423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.754713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.754720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.755020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.755027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.755308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.755315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.755638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.755646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-11-04 12:33:47.755933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.755941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 
00:29:13.294 [2024-11-04 12:33:47.756279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-11-04 12:33:47.756287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.756574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.756582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.756895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.756902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.757249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.757257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.757546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.757554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.757932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.757940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.758236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.758243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.758533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.758541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.758847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.758855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.759039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.759046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 
00:29:13.295 [2024-11-04 12:33:47.759343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.759350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.759664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.759671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.759979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.759986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.760266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.760273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.760588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.760595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.760913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.760921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.761215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.761223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.761526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.761533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.761818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.761825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.762178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.762185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 
00:29:13.295 [2024-11-04 12:33:47.762465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.762472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.762779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.762786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.763087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.763103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.763450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.763456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.763630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.763638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.763915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.763923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.764234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.764241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.764554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.764561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.764946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.764953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.765247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.765254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 
00:29:13.295 [2024-11-04 12:33:47.765554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.765560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.765911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-11-04 12:33:47.765919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-11-04 12:33:47.766236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.766243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.766563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.766571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.766940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.766948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.767149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.767156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.767425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.767432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.767614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.767622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.767787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.767795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.768090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.768097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 
00:29:13.296 [2024-11-04 12:33:47.768412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.768419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.768612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.768619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.768935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.768942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.769241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.769253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.769557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.769564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.769869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.769876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.770060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.770067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.770244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.770252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.770571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.770578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.770874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.770881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 
00:29:13.296 [2024-11-04 12:33:47.771204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.771211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.771488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.771495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.771812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.771819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.772147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.772153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.772313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.772321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.772637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.772644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.772918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.772925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.773249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.773256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.773548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.773555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.773865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.773872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 
00:29:13.296 [2024-11-04 12:33:47.774082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.774089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.774412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.774419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.774688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.774695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.775000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.775007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-11-04 12:33:47.775310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-11-04 12:33:47.775317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.775640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.775648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.775865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.775872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.776121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.776128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.776317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.776324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.776648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.776656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 
00:29:13.297 [2024-11-04 12:33:47.776952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.776960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.777338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.777346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.777559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.777567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.777894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.777902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.778103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.778110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.778393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.778400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.778730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.778737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.778946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.778953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.779202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.779208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.779552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.779558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 
00:29:13.297 [2024-11-04 12:33:47.779870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.779877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.780093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.780101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.780396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.780403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.780715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.780722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.780892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.780900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.781113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.781120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.781490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.781497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.781812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.781820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.782133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.782139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.782429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.782436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 
00:29:13.297 [2024-11-04 12:33:47.782742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.782751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.783048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.783055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.783359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.783366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.783675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.783682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.783991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.783998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-11-04 12:33:47.784393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-11-04 12:33:47.784401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-11-04 12:33:47.784713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-11-04 12:33:47.784720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-11-04 12:33:47.785027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-11-04 12:33:47.785034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-11-04 12:33:47.785323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-11-04 12:33:47.785331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-11-04 12:33:47.785672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-11-04 12:33:47.785680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 
00:29:13.298 [2024-11-04 12:33:47.785989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.298 [2024-11-04 12:33:47.785997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.298 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for every subsequent reconnect attempt, on the order of two hundred records spanning 12:33:47.786 through 12:33:47.848, all against tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 ...]
00:29:13.570 [2024-11-04 12:33:47.849016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.570 [2024-11-04 12:33:47.849023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:13.570 qpair failed and we were unable to recover it.
00:29:13.570 [2024-11-04 12:33:47.849268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.849274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.849561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.849568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.849754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.849761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.850112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.850119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.850427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.850434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.850755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.850763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.850918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.850925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.851223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.851230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.851509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.851516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.851838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.851845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 
00:29:13.570 [2024-11-04 12:33:47.852164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.852171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.852477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.852483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.852669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.852677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.853051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.853058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.853366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.853373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.853665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.853671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.853965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.853972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.854292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.854299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.854581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.854588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.854651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.854658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 
00:29:13.570 [2024-11-04 12:33:47.854844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.854852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.855154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.855161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.855449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.855456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.855763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.855771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.856045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.856052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.856354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.856366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.856649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.856662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.856978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.856991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.857173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.857187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.857374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.857385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 
00:29:13.570 [2024-11-04 12:33:47.857667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.857682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.858000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.858014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.858236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.858248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.858438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.858450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.858768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.858777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.859062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.859069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.859393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.859401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.859744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.859760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.570 qpair failed and we were unable to recover it. 00:29:13.570 [2024-11-04 12:33:47.860096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.570 [2024-11-04 12:33:47.860109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.860448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.860462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 
00:29:13.571 [2024-11-04 12:33:47.860780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.860789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.861094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.861101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.861411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.861418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.861651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.861658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.861923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.861931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.862251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.862263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.862575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.862588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.862852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.862861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.863188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.863195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.863504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.863511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 
00:29:13.571 [2024-11-04 12:33:47.863821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.863828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.863983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.863990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.864269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.864282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.864376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.864388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.864703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.864713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.865042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.865051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.865311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.865319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.865622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.865630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.865937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.865949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.866264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.866273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 
00:29:13.571 [2024-11-04 12:33:47.866580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.866587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.866897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.866905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.867106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.867113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.867433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.867445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.867774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.867784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.868101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.868108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.868393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.868400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.868710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.868717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.869000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.869013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.869326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.869335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 
00:29:13.571 [2024-11-04 12:33:47.869645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.869655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.869995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.870002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.870184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.870191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.870541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.870552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.870889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.870899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.871206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.871213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.871524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.871531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.871845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.871857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.872177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.872187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.872492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.872499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 
00:29:13.571 [2024-11-04 12:33:47.872803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.872811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.873111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.873118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.873424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.873437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.873767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.873776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.874077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.874084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.874291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.874298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.874578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.874586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.874908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.874921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.875103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.875112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.875399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.875407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 
00:29:13.571 [2024-11-04 12:33:47.875605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.875612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.875911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.875918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.876082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.876093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.876374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.876386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.876721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.876736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.877066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.877098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.877339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.877351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.877550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.877562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.877872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.877884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.878212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.878223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 
00:29:13.571 [2024-11-04 12:33:47.878534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.878545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.878884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.878898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.571 qpair failed and we were unable to recover it. 00:29:13.571 [2024-11-04 12:33:47.879140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.571 [2024-11-04 12:33:47.879156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.879463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.879477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.879798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.879809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.880091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.880102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.880441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.880452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.880650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.880661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.880972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.880984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.881294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.881304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 
00:29:13.572 [2024-11-04 12:33:47.881629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.881639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.881951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.881962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.882290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.882300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.882597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.882607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.882928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.882940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.883224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.883234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.883561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.883572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.883887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.883898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.884226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.884238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.884577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.884588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 
00:29:13.572 [2024-11-04 12:33:47.884898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.884911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.885251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.885262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.885574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.885585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.885904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.885916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.886250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.886262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.886575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.886586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.886870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.886883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.887087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.887098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.887427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.887437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 00:29:13.572 [2024-11-04 12:33:47.887751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.572 [2024-11-04 12:33:47.887763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.572 qpair failed and we were unable to recover it. 
00:29:13.572 [2024-11-04 12:33:47.887951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.572 [2024-11-04 12:33:47.887961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420
00:29:13.572 qpair failed and we were unable to recover it.
[... the same three-line failure repeats verbatim, differing only in timestamps, from 2024-11-04 12:33:47.887951 through 12:33:47.952118: connect() fails with errno = 111 in posix_sock_create, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x146a180 at addr=10.0.0.2, port=4420, and every retry ends with "qpair failed and we were unable to recover it." ...]
00:29:13.575 [2024-11-04 12:33:47.952424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.952434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.952653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.952663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.952982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.952992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.953333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.953344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.953631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.953642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.954017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.954028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.954336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.954345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.954666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.954676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.955022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.955032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.955336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.955346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 
00:29:13.575 [2024-11-04 12:33:47.955667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.955678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.955962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.955973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.956287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.956297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.956624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.956635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.956961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.956972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.957295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.957307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.957521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.957531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.957802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.957813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.958013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.958022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.958328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.958338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 
00:29:13.575 [2024-11-04 12:33:47.958656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.958666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.958976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.958987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.959360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.959371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.959680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.959690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.960011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.960022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.960313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.960323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.960630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.960640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.960923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.960933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.961290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.961302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.961503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.961512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 
00:29:13.575 [2024-11-04 12:33:47.961706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.961716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.962103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.962113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.962407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.962418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.962724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.962733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.962887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.962897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.963223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.963234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.963576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.963586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.963887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.963898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.964204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.964214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.964527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.964538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 
00:29:13.575 [2024-11-04 12:33:47.964861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.964871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.965244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.965253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.965559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.965570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.965868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.965878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.966201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.575 [2024-11-04 12:33:47.966211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.575 qpair failed and we were unable to recover it. 00:29:13.575 [2024-11-04 12:33:47.966559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.966569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.966769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.966779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.967065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.967075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.967409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.967419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.967770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.967781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 
00:29:13.576 [2024-11-04 12:33:47.968000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.968010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.968299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.968309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.968632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.968641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.968935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.968945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.969149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.969159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.969440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.969452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.969761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.969771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.970173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.970182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.970497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.970507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.970755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.970766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 
00:29:13.576 [2024-11-04 12:33:47.971085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.971094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.971474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.971484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.971767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.971778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.971972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.971984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.972277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.972287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.972593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.972603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.972913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.972923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.973216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.973225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.973538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.973548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.973908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.973919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 
00:29:13.576 [2024-11-04 12:33:47.974093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.974104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.974410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.974420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.974726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.974736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.974948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.974958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.975172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.975182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.975502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.975512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.975849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.975859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.976163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.976182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.976384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.976395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.976670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.976679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 
00:29:13.576 [2024-11-04 12:33:47.976974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.976984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.977280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.977290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.977622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.977633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.977970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.977981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.978146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.978156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.978481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.978491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.978806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.978817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.979118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.979134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.979472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.979481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.979776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.979787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 
00:29:13.576 [2024-11-04 12:33:47.979999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.980009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.980319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.980328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.980642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.980652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.980965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.980974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.981260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.981270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.981651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.981661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.981952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.981962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.982247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.982257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.982430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.982441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.982767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.982777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 
00:29:13.576 [2024-11-04 12:33:47.983078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.983088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.983219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.983230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.983538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.983548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.983915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.983926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.984185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.984195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.984510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.984520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.984851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.984861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.985160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.985170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.985458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.985468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.985755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.985766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 
00:29:13.576 [2024-11-04 12:33:47.986050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.986060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.986245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.986255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.986601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.576 [2024-11-04 12:33:47.986610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.576 qpair failed and we were unable to recover it. 00:29:13.576 [2024-11-04 12:33:47.986926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.577 [2024-11-04 12:33:47.986936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.577 qpair failed and we were unable to recover it. 00:29:13.577 [2024-11-04 12:33:47.987261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.577 [2024-11-04 12:33:47.987271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.577 qpair failed and we were unable to recover it. 00:29:13.577 [2024-11-04 12:33:47.987569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.577 [2024-11-04 12:33:47.987579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.577 qpair failed and we were unable to recover it. 00:29:13.577 [2024-11-04 12:33:47.987778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.577 [2024-11-04 12:33:47.987789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.577 qpair failed and we were unable to recover it. 00:29:13.577 [2024-11-04 12:33:47.988116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.577 [2024-11-04 12:33:47.988126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.577 qpair failed and we were unable to recover it. 00:29:13.577 [2024-11-04 12:33:47.988403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.577 [2024-11-04 12:33:47.988414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.577 qpair failed and we were unable to recover it. 00:29:13.577 [2024-11-04 12:33:47.988722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.577 [2024-11-04 12:33:47.988732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.577 qpair failed and we were unable to recover it. 
00:29:13.577 [2024-11-04 12:33:47.989132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.577 [2024-11-04 12:33:47.989143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.577 qpair failed and we were unable to recover it. 00:29:13.577 [2024-11-04 12:33:47.989438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.577 [2024-11-04 12:33:47.989448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.577 qpair failed and we were unable to recover it. 00:29:13.577 [2024-11-04 12:33:47.989834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.577 [2024-11-04 12:33:47.989845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.577 qpair failed and we were unable to recover it. 00:29:13.577 [2024-11-04 12:33:47.990151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.577 [2024-11-04 12:33:47.990161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.577 qpair failed and we were unable to recover it. 00:29:13.577 [2024-11-04 12:33:47.990490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.577 [2024-11-04 12:33:47.990499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.577 qpair failed and we were unable to recover it. 00:29:13.577 [2024-11-04 12:33:47.990801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.577 [2024-11-04 12:33:47.990811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.577 qpair failed and we were unable to recover it. 00:29:13.577 [2024-11-04 12:33:47.991099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.577 [2024-11-04 12:33:47.991109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.577 qpair failed and we were unable to recover it. 00:29:13.577 [2024-11-04 12:33:47.991388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.577 [2024-11-04 12:33:47.991398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.577 qpair failed and we were unable to recover it. 00:29:13.577 [2024-11-04 12:33:47.991709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.577 [2024-11-04 12:33:47.991719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.577 qpair failed and we were unable to recover it. 00:29:13.577 [2024-11-04 12:33:47.992064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.577 [2024-11-04 12:33:47.992075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.577 qpair failed and we were unable to recover it. 
00:29:13.577 [2024-11-04 12:33:47.992377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.577 [2024-11-04 12:33:47.992387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420
00:29:13.577 qpair failed and we were unable to recover it.
00:29:13.577 [... the same three-line failure (connect() errno = 111, sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420, qpair unrecoverable) repeats for every reconnect attempt from 12:33:47.992 through 12:33:48.057; duplicate entries collapsed ...]
00:29:13.580 [2024-11-04 12:33:48.057050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.580 [2024-11-04 12:33:48.057060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420
00:29:13.580 qpair failed and we were unable to recover it.
00:29:13.580 [2024-11-04 12:33:48.057369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.057379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.057691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.057701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.058011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.058021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.058300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.058310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.058537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.058546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.058846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.058856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.059200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.059209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.059553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.059563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.059768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.059778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.059985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.059995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 
00:29:13.580 [2024-11-04 12:33:48.060341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.060351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.060555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.060565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.060749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.060761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.061065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.061076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.061405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.061415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.061715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.061726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.062069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.062080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.062274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.062284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.062502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.062512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.062697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.062707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 
00:29:13.580 [2024-11-04 12:33:48.062907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.062917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.063264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.063274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.063592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.063602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.063902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.063912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.064199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.064208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.064515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.064525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.064892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.064903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.065198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.065210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.065506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.065517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.065833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.065844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 
00:29:13.580 [2024-11-04 12:33:48.066137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.066148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.066415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.066425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.066710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.066721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.067062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.067073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.067368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.067378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.067694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.067704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.067898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.067909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.068270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.068279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.068466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.068476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.068782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.068793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 
00:29:13.580 [2024-11-04 12:33:48.069109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.069118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.069432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.069442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.069724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.069734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.070088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.070098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.070398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.070408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.070790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.070800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.071079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.071089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.071409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.071419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.071781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.071793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.072113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.072123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 
00:29:13.580 [2024-11-04 12:33:48.072498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.072508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.072775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.072785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.073072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.580 [2024-11-04 12:33:48.073081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.580 qpair failed and we were unable to recover it. 00:29:13.580 [2024-11-04 12:33:48.073249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.073260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.073636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.073648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.073970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.073980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.074293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.074303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.074678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.074689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.074899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.074909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.075108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.075118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 
00:29:13.581 [2024-11-04 12:33:48.075399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.075409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.075711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.075721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.076082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.076093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.076399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.076409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.076708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.076718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.076906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.076917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.077253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.077263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.077443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.077454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.077761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.077771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.078087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.078097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 
00:29:13.581 [2024-11-04 12:33:48.078401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.078412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.078718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.078729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.079018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.079029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.079334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.079344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.079646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.079657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.079971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.079981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.080291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.080301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.080615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.080626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.080910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.080921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.081234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.081245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 
00:29:13.581 [2024-11-04 12:33:48.081574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.081584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.081881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.081891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.082206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.082225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.082430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.082440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.082657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.082668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.082980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.082992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.083314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.083324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.083617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.083627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.083958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.083968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.084278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.084288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 
00:29:13.581 [2024-11-04 12:33:48.084571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.084582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.084895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.084906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.085214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.085224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.085554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.085565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.085875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.085885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.086187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.086198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.086498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.086508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.086893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.086903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.087221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.087231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.087516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.087526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 
00:29:13.581 [2024-11-04 12:33:48.087838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.087849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.088155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.088165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.088472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.088482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.088789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.088799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.089078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.089088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.089381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.089392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.089698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.089708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.090007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.090017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.090315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.090325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.090633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.090643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 
00:29:13.581 [2024-11-04 12:33:48.090914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.090925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.091223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.091239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.091553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.581 [2024-11-04 12:33:48.091563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.581 qpair failed and we were unable to recover it. 00:29:13.581 [2024-11-04 12:33:48.091848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.582 [2024-11-04 12:33:48.091858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.582 qpair failed and we were unable to recover it. 00:29:13.582 [2024-11-04 12:33:48.092187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.582 [2024-11-04 12:33:48.092198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.582 qpair failed and we were unable to recover it. 00:29:13.582 [2024-11-04 12:33:48.092499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.582 [2024-11-04 12:33:48.092509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.582 qpair failed and we were unable to recover it. 00:29:13.582 [2024-11-04 12:33:48.092809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.582 [2024-11-04 12:33:48.092819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.582 qpair failed and we were unable to recover it. 00:29:13.582 [2024-11-04 12:33:48.093118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.582 [2024-11-04 12:33:48.093134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.582 qpair failed and we were unable to recover it. 00:29:13.582 [2024-11-04 12:33:48.093449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.582 [2024-11-04 12:33:48.093459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.582 qpair failed and we were unable to recover it. 00:29:13.582 [2024-11-04 12:33:48.093759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.582 [2024-11-04 12:33:48.093769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.582 qpair failed and we were unable to recover it. 
00:29:13.582 [2024-11-04 12:33:48.094082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.582 [2024-11-04 12:33:48.094092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.582 qpair failed and we were unable to recover it. 00:29:13.582 [2024-11-04 12:33:48.094407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.582 [2024-11-04 12:33:48.094417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.582 qpair failed and we were unable to recover it. 00:29:13.582 [2024-11-04 12:33:48.094722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.582 [2024-11-04 12:33:48.094735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.582 qpair failed and we were unable to recover it. 00:29:13.582 [2024-11-04 12:33:48.095061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.582 [2024-11-04 12:33:48.095072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.582 qpair failed and we were unable to recover it. 00:29:13.582 [2024-11-04 12:33:48.095371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.582 [2024-11-04 12:33:48.095381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.582 qpair failed and we were unable to recover it. 00:29:13.582 [2024-11-04 12:33:48.095692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.582 [2024-11-04 12:33:48.095703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.582 qpair failed and we were unable to recover it. 00:29:13.582 [2024-11-04 12:33:48.096004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.582 [2024-11-04 12:33:48.096015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.582 qpair failed and we were unable to recover it. 00:29:13.582 [2024-11-04 12:33:48.096313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.582 [2024-11-04 12:33:48.096324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.582 qpair failed and we were unable to recover it. 00:29:13.582 [2024-11-04 12:33:48.096623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.582 [2024-11-04 12:33:48.096634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.582 qpair failed and we were unable to recover it. 00:29:13.582 [2024-11-04 12:33:48.096955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.582 [2024-11-04 12:33:48.096966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.582 qpair failed and we were unable to recover it. 
00:29:13.582 [2024-11-04 12:33:48.097268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.582 [2024-11-04 12:33:48.097278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420
00:29:13.582 qpair failed and we were unable to recover it.
[last message triplet repeated approximately 200 more times between 12:33:48.097 and 12:33:48.161 (Jenkins timestamps 00:29:13.582 through 00:29:13.862); every repetition reports the same connect() failure with errno = 111 from posix_sock_create, the same sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 from nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it."]
00:29:13.862 [2024-11-04 12:33:48.162070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.162080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.162388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.162399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.162583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.162593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.162903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.162915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.163226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.163237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.163558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.163569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.163885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.163896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.164229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.164240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.164578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.164591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.164905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.164916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 
00:29:13.862 [2024-11-04 12:33:48.165210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.165220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.165526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.165536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.165830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.165841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.166146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.166156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.166473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.166482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.166767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.166777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.167078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.167088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.167401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.167411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.167698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.167709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.168009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.168020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 
00:29:13.862 [2024-11-04 12:33:48.168319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.168329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.168610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.168620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.168926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.168937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.169247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.169257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.169478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.169488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.169808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.169818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.170016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.170029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.170313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.170323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.170613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.170622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.170963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.170973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 
00:29:13.862 [2024-11-04 12:33:48.171204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.171214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.171540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.171550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.862 [2024-11-04 12:33:48.171829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.862 [2024-11-04 12:33:48.171839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.862 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.172138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.172148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.172473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.172483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.172805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.172815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.173109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.173120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.173421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.173431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.173762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.173773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.174091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.174100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 
00:29:13.863 [2024-11-04 12:33:48.174404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.174415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.174753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.174763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.175071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.175081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.175386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.175396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.175680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.175698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.175891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.175901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.176229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.176239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.176572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.176581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.176880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.176898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.177207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.177216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 
00:29:13.863 [2024-11-04 12:33:48.177515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.177525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.177861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.177871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.178147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.178157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.178494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.178505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.178786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.178796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.179109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.179119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.179444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.179455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.179762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.179773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.180091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.180100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.180399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.180409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 
00:29:13.863 [2024-11-04 12:33:48.180743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.180757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.181063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.181072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.181384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.181394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.181679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.181688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.863 [2024-11-04 12:33:48.182002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.863 [2024-11-04 12:33:48.182012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.863 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.182308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.182318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.182644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.182653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.183017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.183028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.183339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.183350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.183644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.183654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 
00:29:13.864 [2024-11-04 12:33:48.183971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.183981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.184274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.184284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.184553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.184563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.184881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.184892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.185202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.185213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.185488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.185499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.185830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.185840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.186125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.186134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.186461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.186471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.186776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.186786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 
00:29:13.864 [2024-11-04 12:33:48.187121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.187130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.187466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.187476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.187789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.187800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.188113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.188123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.188403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.188413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.188697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.188707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.189021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.189031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.189235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.189245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.189589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.189600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.189816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.189827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 
00:29:13.864 [2024-11-04 12:33:48.189910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.189920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.190096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.190106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.190423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.190432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.190716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.190726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.191061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.191072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.191275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.191285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.191589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.191599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.191961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.191972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.192243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.192252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.192574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.192583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 
00:29:13.864 [2024-11-04 12:33:48.192945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.192956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.193260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.193269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.864 qpair failed and we were unable to recover it. 00:29:13.864 [2024-11-04 12:33:48.193414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.864 [2024-11-04 12:33:48.193425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.193701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.193711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.193886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.193897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.194192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.194201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.194487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.194503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.194855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.194865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.195077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.195087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.195263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.195273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 
00:29:13.865 [2024-11-04 12:33:48.195546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.195555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.195861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.195871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.196113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.196123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.196442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.196452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.196745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.196763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.197081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.197090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.197258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.197268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.197600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.197610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.197917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.197927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.198230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.198240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 
00:29:13.865 [2024-11-04 12:33:48.198575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.198585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.198918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.198931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.199233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.199243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.199426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.199436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.199879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.199890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.200155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.200165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.200351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.200363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.200642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.200652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.200885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.200895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 00:29:13.865 [2024-11-04 12:33:48.201146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.201156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it. 
00:29:13.865 [2024-11-04 12:33:48.201479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.865 [2024-11-04 12:33:48.201489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.865 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triplet repeats for every reconnect attempt from 12:33:48.201781 through 12:33:48.267010, always with errno = 111, tqpair=0x146a180, addr=10.0.0.2, port=4420; the duplicate entries are elided ...]
00:29:13.871 [2024-11-04 12:33:48.267338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.267348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.267647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.267657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.267980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.267990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.268360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.268370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.268683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.268693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.268996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.269006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.269331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.269341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.269644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.269654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.269962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.269972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.270257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.270267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 
00:29:13.871 [2024-11-04 12:33:48.270532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.270542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.270837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.270848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.271189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.271200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.271555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.271565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.271872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.271883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.272089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.272098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.272417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.272426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.272728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.272738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.273030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.273040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.273346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.273356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 
00:29:13.871 [2024-11-04 12:33:48.273647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.273657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.273946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.273956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.274273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.274282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.274581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.274591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.274886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.274896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.275091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.275102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.275445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.275455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.871 [2024-11-04 12:33:48.275644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.871 [2024-11-04 12:33:48.275655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.871 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.275995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.276006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.276192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.276202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 
00:29:13.872 [2024-11-04 12:33:48.276548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.276558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.276752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.276762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.277071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.277081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.277396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.277405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.277713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.277723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.278024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.278035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.278405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.278415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.278608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.278619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.278949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.278959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.279178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.279188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 
00:29:13.872 [2024-11-04 12:33:48.279488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.279498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.279803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.279813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.280121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.280131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.280442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.280452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.280756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.280766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.281073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.281083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.281391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.281400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.281678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.281688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.281984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.281995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.282293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.282302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 
00:29:13.872 [2024-11-04 12:33:48.282620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.282630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.282949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.282960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.283243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.283253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.283558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.283567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.283848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.283858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.284169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.284180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.284486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.284496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.284836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.284846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.285188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.285198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.285478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.285488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 
00:29:13.872 [2024-11-04 12:33:48.285789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.285799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.286117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.286127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.286435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.286445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.286757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.286767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.287077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.872 [2024-11-04 12:33:48.287088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.872 qpair failed and we were unable to recover it. 00:29:13.872 [2024-11-04 12:33:48.287393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.287403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.287715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.287725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.288000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.288010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.288323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.288333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.288619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.288628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 
00:29:13.873 [2024-11-04 12:33:48.288841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.288851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.289037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.289048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.289404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.289415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.289693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.289703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.290017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.290028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.290339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.290349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.290634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.290644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.290977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.290987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.291285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.291294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.291624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.291633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 
00:29:13.873 [2024-11-04 12:33:48.292021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.292031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.292346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.292356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.292687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.292697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.293008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.293018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.293397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.293407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.293681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.293691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.293993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.294003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.294318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.294328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.294634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.294644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.294960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.294971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 
00:29:13.873 [2024-11-04 12:33:48.295275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.295285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.295563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.295574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.295872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.295882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.296171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.296183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.296471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.296482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.296789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.296799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.297088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.297097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.873 [2024-11-04 12:33:48.297387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.873 [2024-11-04 12:33:48.297397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.873 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.297716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.297726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.298030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.298041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 
00:29:13.874 [2024-11-04 12:33:48.298345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.298355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.298659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.298669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.298969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.298987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.299314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.299324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.299616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.299627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.299960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.299970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.300253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.300264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.300572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.300582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.300880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.300890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.301182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.301192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 
00:29:13.874 [2024-11-04 12:33:48.301503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.301513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.301820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.301830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.302125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.302143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.302441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.302450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.302754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.302764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.303081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.303090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.303380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.303389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.303689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.303698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.304068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.304078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.304383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.304393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 
00:29:13.874 [2024-11-04 12:33:48.304694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.304706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.305011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.305021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.305332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.305343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.305650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.305661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.305834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.305844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.306035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.306045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.306419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.306429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.306711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.306721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.307026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.307036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.307341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.307351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 
00:29:13.874 [2024-11-04 12:33:48.307656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.307666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.307959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.307970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.308288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.308298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.308590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.308601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.874 [2024-11-04 12:33:48.308925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.874 [2024-11-04 12:33:48.308935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.874 qpair failed and we were unable to recover it. 00:29:13.875 [2024-11-04 12:33:48.309220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.875 [2024-11-04 12:33:48.309229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.875 qpair failed and we were unable to recover it. 00:29:13.875 [2024-11-04 12:33:48.309512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.875 [2024-11-04 12:33:48.309522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.875 qpair failed and we were unable to recover it. 00:29:13.875 [2024-11-04 12:33:48.309834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.875 [2024-11-04 12:33:48.309845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.875 qpair failed and we were unable to recover it. 00:29:13.875 [2024-11-04 12:33:48.310199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.875 [2024-11-04 12:33:48.310209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.875 qpair failed and we were unable to recover it. 00:29:13.875 [2024-11-04 12:33:48.310496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.875 [2024-11-04 12:33:48.310514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.875 qpair failed and we were unable to recover it. 
00:29:13.880 [2024-11-04 12:33:48.371535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.371545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 00:29:13.880 [2024-11-04 12:33:48.371854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.371865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 00:29:13.880 [2024-11-04 12:33:48.372152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.372162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 00:29:13.880 [2024-11-04 12:33:48.372463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.372474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 00:29:13.880 [2024-11-04 12:33:48.372762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.372772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 00:29:13.880 [2024-11-04 12:33:48.373063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.373073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 00:29:13.880 [2024-11-04 12:33:48.373388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.373399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 00:29:13.880 [2024-11-04 12:33:48.373774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.373785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 00:29:13.880 [2024-11-04 12:33:48.374120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.374130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 00:29:13.880 [2024-11-04 12:33:48.374431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.374441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 
00:29:13.880 [2024-11-04 12:33:48.374626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.374646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 00:29:13.880 [2024-11-04 12:33:48.374853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.374864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 00:29:13.880 [2024-11-04 12:33:48.375039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.375051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 00:29:13.880 [2024-11-04 12:33:48.375241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.375251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 00:29:13.880 [2024-11-04 12:33:48.375541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.375551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 00:29:13.880 [2024-11-04 12:33:48.375854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.375868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 00:29:13.880 [2024-11-04 12:33:48.376186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.376196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 00:29:13.880 [2024-11-04 12:33:48.376525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.880 [2024-11-04 12:33:48.376536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.880 qpair failed and we were unable to recover it. 00:29:13.880 [2024-11-04 12:33:48.376822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.376834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.377158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.377168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 
00:29:13.881 [2024-11-04 12:33:48.377464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.377475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.377815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.377826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.378206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.378216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.378519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.378530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.378729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.378741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.379083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.379094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.379400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.379410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.379729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.379739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.380030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.380040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.380370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.380380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 
00:29:13.881 [2024-11-04 12:33:48.380691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.380702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.381038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.381050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.381371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.381381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.381683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.381693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.382036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.382047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.382338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.382348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.382640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.382649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.383062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.383072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.383389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.383399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.383682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.383693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 
00:29:13.881 [2024-11-04 12:33:48.384002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.384013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.384317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.384327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.384509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.384520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.384898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.384909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.385222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.385232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.385540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.385551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.385861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.385872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.386174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.386189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.386488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.386498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.386831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.386843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 
00:29:13.881 [2024-11-04 12:33:48.387154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.387166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.387480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.387491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.387836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.387848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.388012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.388027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.388299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.881 [2024-11-04 12:33:48.388310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.881 qpair failed and we were unable to recover it. 00:29:13.881 [2024-11-04 12:33:48.388652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.388663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.388963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.388974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.389291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.389302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.389618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.389628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.389913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.389924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 
00:29:13.882 [2024-11-04 12:33:48.390226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.390236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.390556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.390568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.390892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.390903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.391243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.391254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.391567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.391578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.391915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.391929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.392122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.392135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.392465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.392475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.392776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.392788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.392986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.392996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 
00:29:13.882 [2024-11-04 12:33:48.393183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.393195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.393571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.393581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.393867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.393879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.394234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.394247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.394533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.394546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.394835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.394848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.395175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.395185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.395499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.395509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.395854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.395866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.396197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.396208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 
00:29:13.882 [2024-11-04 12:33:48.396519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.396530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.396841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.396852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.397175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.397186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.397503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.397517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.397836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.397847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.398176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.398186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.398514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.398524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.882 [2024-11-04 12:33:48.398845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.882 [2024-11-04 12:33:48.398856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.882 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.399246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.399256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.399534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.399543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 
00:29:13.883 [2024-11-04 12:33:48.399859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.399869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.400180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.400189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.400473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.400485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.400773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.400783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.401084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.401094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.401396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.401406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.401706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.401716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.402016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.402027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.402368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.402379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.402682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.402692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 
00:29:13.883 [2024-11-04 12:33:48.403018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.403028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.403329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.403339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.403649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.403659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.403960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.403970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.404254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.404263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.404558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.404568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.404853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.404863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.405179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.405190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.405466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.405477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.405808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.405819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 
00:29:13.883 [2024-11-04 12:33:48.406120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.406130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.406439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.406449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.406834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.406845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.407227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.407237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.407534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.407544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.407813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.407823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.408138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.408148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.408471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.408480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.408671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.408681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.409017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.409030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 
00:29:13.883 [2024-11-04 12:33:48.409342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.409352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.409541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.409552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.409863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.409873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.410210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.410220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.883 qpair failed and we were unable to recover it. 00:29:13.883 [2024-11-04 12:33:48.410498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.883 [2024-11-04 12:33:48.410508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.884 qpair failed and we were unable to recover it. 00:29:13.884 [2024-11-04 12:33:48.410822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.884 [2024-11-04 12:33:48.410832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.884 qpair failed and we were unable to recover it. 00:29:13.884 [2024-11-04 12:33:48.411120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.884 [2024-11-04 12:33:48.411130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.884 qpair failed and we were unable to recover it. 00:29:13.884 [2024-11-04 12:33:48.411418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.884 [2024-11-04 12:33:48.411429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.884 qpair failed and we were unable to recover it. 00:29:13.884 [2024-11-04 12:33:48.411742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.884 [2024-11-04 12:33:48.411766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.884 qpair failed and we were unable to recover it. 00:29:13.884 [2024-11-04 12:33:48.412074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.884 [2024-11-04 12:33:48.412084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:13.884 qpair failed and we were unable to recover it. 
00:29:13.884 [2024-11-04 12:33:48.412454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.884 [2024-11-04 12:33:48.412464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420
00:29:13.884 qpair failed and we were unable to recover it.
00:29:14.164 [... the same three-line error repeats continuously, timestamps 12:33:48.412 through 12:33:48.477: every connect() retry to tqpair=0x146a180 (addr=10.0.0.2, port=4420) fails with errno = 111 (ECONNREFUSED) and the qpair cannot be recovered ...]
00:29:14.164 [2024-11-04 12:33:48.477714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.477725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.478051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.478063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.478386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.478396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.478736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.478750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.479029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.479039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.479344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.479353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.479661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.479672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.479990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.480001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.480299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.480309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.480595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.480605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 
00:29:14.164 [2024-11-04 12:33:48.481037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.481052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.481326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.481339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.481655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.481666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.481982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.481993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.482275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.482285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.482585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.482595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.482907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.482918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.483197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.483207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.483511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.483521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.483811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.483822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 
00:29:14.164 [2024-11-04 12:33:48.484132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.484142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.484441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.484451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.484759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.484770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.485078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.485088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.485378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.485387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.485674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.485684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.485877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.485894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.486169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.486179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.486453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.486463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.164 [2024-11-04 12:33:48.486797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.486807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 
00:29:14.164 [2024-11-04 12:33:48.487108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.164 [2024-11-04 12:33:48.487117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.164 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.487433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.487443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.487768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.487779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.488078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.488089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.488378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.488388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.488599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.488608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.488912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.488922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.489153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.489163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.489490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.489499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.489807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.489818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 
00:29:14.165 [2024-11-04 12:33:48.490125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.490134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.490418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.490428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.490772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.490783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.491068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.491078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.491255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.491266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.491602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.491612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.491914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.491925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.492236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.492246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.492399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.492415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.492742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.492755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 
00:29:14.165 [2024-11-04 12:33:48.493129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.493139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.493421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.493431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.493593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.493604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.493967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.493977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.494364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.494374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.494613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.494622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.495021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.495031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.495325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.495335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.495632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.495642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.495950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.495961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 
00:29:14.165 [2024-11-04 12:33:48.496289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.496299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.496636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.496647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.497025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.497036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.497288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.497297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.497588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.497600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.497925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.497935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.498112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.498123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.165 [2024-11-04 12:33:48.498453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.165 [2024-11-04 12:33:48.498463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.165 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.498773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.498783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.498970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.498980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 
00:29:14.166 [2024-11-04 12:33:48.499270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.499280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.499640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.499650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.499951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.499961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.500237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.500247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.500564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.500574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.500964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.500973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.501266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.501276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.501447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.501458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.501796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.501806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.501967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.501977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 
00:29:14.166 [2024-11-04 12:33:48.502322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.502332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.502487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.502497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.502697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.502707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.503000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.503010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.503301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.503311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.503487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.503497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.503762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.503772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.504171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.504181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.504464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.504473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.504755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.504766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 
00:29:14.166 [2024-11-04 12:33:48.505082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.505091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.505393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.505405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.505598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.505614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.505967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.505977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.506280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.506290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.506585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.506594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.506900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.506910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.507263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.507272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.507579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.507588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.507904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.507915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 
00:29:14.166 [2024-11-04 12:33:48.508225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.508236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.508512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.508523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.508748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.508759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.509059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.509069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.166 [2024-11-04 12:33:48.509381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.166 [2024-11-04 12:33:48.509391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.166 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.509678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.509688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.509990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.510000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.510306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.510316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.510611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.510620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.510916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.510926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 
00:29:14.167 [2024-11-04 12:33:48.511233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.511244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.511542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.511552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.511876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.511887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.512159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.512169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.512490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.512499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.512814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.512824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.513122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.513131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.513424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.513433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.513753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.513763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.514089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.514099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 
00:29:14.167 [2024-11-04 12:33:48.514340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.514350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.514665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.514675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.514854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.514865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.515156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.515166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.515456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.515465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.515756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.515766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.516087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.516097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.516378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.516388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.516609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.516618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.516899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.516910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 
00:29:14.167 [2024-11-04 12:33:48.517193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.517203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.517505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.517515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.517857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.517869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.518200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.518210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.518502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.518512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.518766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.518777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.519092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.519102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.167 qpair failed and we were unable to recover it. 00:29:14.167 [2024-11-04 12:33:48.519301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.167 [2024-11-04 12:33:48.519311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.168 qpair failed and we were unable to recover it. 00:29:14.168 [2024-11-04 12:33:48.519681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.168 [2024-11-04 12:33:48.519691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.168 qpair failed and we were unable to recover it. 00:29:14.168 [2024-11-04 12:33:48.519897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.168 [2024-11-04 12:33:48.519908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.168 qpair failed and we were unable to recover it. 
00:29:14.173 [2024-11-04 12:33:48.580069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.580078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.580403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.580413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.580665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.580675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.580915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.580925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.581286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.581295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.581468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.581479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.581811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.581821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.582021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.582031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.582369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.582379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.582551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.582563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 
00:29:14.173 [2024-11-04 12:33:48.582878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.582888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.583198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.583208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.583534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.583544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.583922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.583932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.584236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.584246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.584418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.584429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.584772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.584782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.585097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.585107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.585410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.585420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.585707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.585717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 
00:29:14.173 [2024-11-04 12:33:48.586098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.586109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.586272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.586282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.586656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.586666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.586876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.586886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.587156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.587166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.587491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.587501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.587804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.587814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.588139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.588149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.588467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.588478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.588761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.588772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 
00:29:14.173 [2024-11-04 12:33:48.588973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.588983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.589315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.589325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.589628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.589637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.589924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.589934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.590286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.590296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.590585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.173 [2024-11-04 12:33:48.590595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.173 qpair failed and we were unable to recover it. 00:29:14.173 [2024-11-04 12:33:48.590906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.590916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.591196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.591206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.591514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.591525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.591816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.591826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 
00:29:14.174 [2024-11-04 12:33:48.592145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.592155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.592485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.592495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.592807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.592817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.593148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.593158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.593468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.593477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.593766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.593777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.594076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.594088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.594371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.594381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.594670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.594680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.594972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.594983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 
00:29:14.174 [2024-11-04 12:33:48.595294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.595304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.595588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.595599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.595914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.595924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.596208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.596218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.596527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.596537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.596852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.596863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.597144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.597154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.597458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.597467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.597780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.597791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.598097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.598106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 
00:29:14.174 [2024-11-04 12:33:48.598413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.598423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.598742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.598755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.599064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.599074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.599414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.599424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.599724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.599734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.600061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.600071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.600352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.600362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.600640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.600650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.600956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.600967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.601277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.601288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 
00:29:14.174 [2024-11-04 12:33:48.601598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.601608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.601897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.601907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.602098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.602109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.602427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.602439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.602751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.174 [2024-11-04 12:33:48.602761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.174 qpair failed and we were unable to recover it. 00:29:14.174 [2024-11-04 12:33:48.603092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.603102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.603400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.603410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.603591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.603601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.603957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.603967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.604252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.604262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 
00:29:14.175 [2024-11-04 12:33:48.604569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.604579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.604894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.604905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.605220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.605230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.605567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.605577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.605887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.605897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.606199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.606209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.606487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.606497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.606832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.606842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.607118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.607128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.607428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.607438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 
00:29:14.175 [2024-11-04 12:33:48.607755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.607765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.608057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.608067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.608263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.608273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.608592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.608603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.608945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.608955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.609223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.609232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.609555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.609564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.609845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.609856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.610150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.610160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.610446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.610456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 
00:29:14.175 [2024-11-04 12:33:48.610750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.610762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.611074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.611085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.611410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.611420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.611696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.611706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.611859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.611869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.612150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.612160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.612472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.612481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.612787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.612798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.613116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.613126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.613406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.613416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 
00:29:14.175 [2024-11-04 12:33:48.613694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.613705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.614022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.614033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.614328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.614339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.614641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.175 [2024-11-04 12:33:48.614651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.175 qpair failed and we were unable to recover it. 00:29:14.175 [2024-11-04 12:33:48.614973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.614984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.615277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.615287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.615672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.615682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.615973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.615983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.616319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.616329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.616615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.616624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 
00:29:14.176 [2024-11-04 12:33:48.616917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.616928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.617121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.617131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.617424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.617434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.617807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.617817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.618119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.618129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.618445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.618455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.618764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.618774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.619014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.619024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.619341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.619351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.619660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.619671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 
00:29:14.176 [2024-11-04 12:33:48.619845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.619856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.620162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.620172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.620490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.620500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.620811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.620821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.621096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.621106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.621403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.621413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.621724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.621734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.622031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.622042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.622340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.622350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 00:29:14.176 [2024-11-04 12:33:48.622646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.176 [2024-11-04 12:33:48.622655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.176 qpair failed and we were unable to recover it. 
00:29:14.176 [2024-11-04 12:33:48.622829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.176 [2024-11-04 12:33:48.622839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420
00:29:14.176 qpair failed and we were unable to recover it.
[... the same three-line error group — connect() failed, errno = 111 (ECONNREFUSED) / sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats continuously for every reconnect attempt between 12:33:48.622829 and 12:33:48.687009 ...]
00:29:14.182 [2024-11-04 12:33:48.686999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.182 [2024-11-04 12:33:48.687009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420
00:29:14.182 qpair failed and we were unable to recover it.
00:29:14.182 [2024-11-04 12:33:48.687311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.687321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.687612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.687622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.687992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.688002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.688307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.688317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.688611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.688620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.688950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.688961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.689271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.689282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.689593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.689604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.689976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.689987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.690284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.690294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 
00:29:14.182 [2024-11-04 12:33:48.690586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.690596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.690877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.690887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.691206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.691216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.691397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.691408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.691673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.691683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.691980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.691991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.692221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.692231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.692542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.692551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.692846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.692856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.693031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.693041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 
00:29:14.182 [2024-11-04 12:33:48.693356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.693365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.693661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.693671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.694041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.694051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.694364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.694374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.694675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.182 [2024-11-04 12:33:48.694686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.182 qpair failed and we were unable to recover it. 00:29:14.182 [2024-11-04 12:33:48.694997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.695008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.695334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.695344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.695651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.695662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.695875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.695887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.696148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.696159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 
00:29:14.183 [2024-11-04 12:33:48.696404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.696414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.696716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.696726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.697039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.697050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.697340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.697350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.697646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.697656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.697972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.697984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.698267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.698277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.698611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.698622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.698902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.698914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.699231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.699241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 
00:29:14.183 [2024-11-04 12:33:48.699547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.699557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.699849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.699861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.700084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.700095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.700449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.700459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.700763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.700774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.701080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.701091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.701371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.701381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.701686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.701696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.702008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.702018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.702352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.702363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 
00:29:14.183 [2024-11-04 12:33:48.702677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.702687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.703000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.703010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.703305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.703315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.703607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.703617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.703806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.703817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.704188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.704198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.704501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.704512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.704849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.704859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.705158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.705168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.705485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.705495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 
00:29:14.183 [2024-11-04 12:33:48.705806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.705816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.706146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.706155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.183 [2024-11-04 12:33:48.706466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.183 [2024-11-04 12:33:48.706479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.183 qpair failed and we were unable to recover it. 00:29:14.184 [2024-11-04 12:33:48.706783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.706793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 00:29:14.184 [2024-11-04 12:33:48.707097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.707107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 00:29:14.184 [2024-11-04 12:33:48.707409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.707419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 00:29:14.184 [2024-11-04 12:33:48.707727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.707736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 00:29:14.184 [2024-11-04 12:33:48.708018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.708028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 00:29:14.184 [2024-11-04 12:33:48.708341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.708351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 00:29:14.184 [2024-11-04 12:33:48.708643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.708652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 
00:29:14.184 [2024-11-04 12:33:48.708970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.708980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 00:29:14.184 [2024-11-04 12:33:48.709302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.709311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 00:29:14.184 [2024-11-04 12:33:48.709619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.709630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 00:29:14.184 [2024-11-04 12:33:48.709803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.709814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 00:29:14.184 [2024-11-04 12:33:48.710124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.710133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 00:29:14.184 [2024-11-04 12:33:48.710439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.710448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 00:29:14.184 [2024-11-04 12:33:48.710763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.710773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 00:29:14.184 [2024-11-04 12:33:48.711066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.711076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 00:29:14.184 [2024-11-04 12:33:48.711385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.711395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 00:29:14.184 [2024-11-04 12:33:48.711677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.711687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 
00:29:14.184 [2024-11-04 12:33:48.711988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.184 [2024-11-04 12:33:48.711998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.184 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.712339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.712350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.712698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.712709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.713011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.713022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.713361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.713371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.713741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.713756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.714048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.714058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.714334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.714344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.714647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.714656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.714968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.714978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 
00:29:14.461 [2024-11-04 12:33:48.715305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.715315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.715600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.715610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.715917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.715928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.716204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.716215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.716520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.716529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.716834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.716844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.717141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.717151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.717443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.717452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.717756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.717766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.718048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.718058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 
00:29:14.461 [2024-11-04 12:33:48.718347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.718357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.718674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.718684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.718990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.719000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.719328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.719338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.719640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.719650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.719949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.719960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.720287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.720296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.461 qpair failed and we were unable to recover it. 00:29:14.461 [2024-11-04 12:33:48.720610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.461 [2024-11-04 12:33:48.720620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.720906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.720916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.721229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.721238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 
00:29:14.462 [2024-11-04 12:33:48.721543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.721554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.721874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.721885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.722190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.722201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.722531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.722541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.722888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.722898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.723181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.723191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.723492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.723502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.723819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.723829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.724157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.724167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.724459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.724469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 
00:29:14.462 [2024-11-04 12:33:48.724650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.724661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.724984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.724995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.725372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.725382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.725664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.725674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.725979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.725990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.726328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.726338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.726496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.726507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.726850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.726860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.727168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.727178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 00:29:14.462 [2024-11-04 12:33:48.727467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.727477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it. 
00:29:14.462 [2024-11-04 12:33:48.727756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.462 [2024-11-04 12:33:48.727769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.462 qpair failed and we were unable to recover it.
[... the same posix.c:1055:posix_sock_create / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock error pair repeats for every reconnect attempt of tqpair=0x146a180 (addr=10.0.0.2, port=4420) from 12:33:48.727756 through roughly 12:33:48.790 on 2024-11-04, each attempt ending with "qpair failed and we were unable to recover it."; only the timestamps differ between repetitions ...]
00:29:14.468 [2024-11-04 12:33:48.790514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.790524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.790914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.790925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.791230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.791240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.791581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.791591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.791822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.791832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.792125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.792136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.792441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.792451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.792739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.792754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.793052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.793062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.793346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.793355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 
00:29:14.468 [2024-11-04 12:33:48.793668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.793677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.793978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.793991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.794381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.794391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.794672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.794682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.794986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.794997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.795283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.795294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.795636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.795647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.795971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.795982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.796283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.796293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.796584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.796594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 
00:29:14.468 [2024-11-04 12:33:48.796862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.796873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.797202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.797212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.797520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.797530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.797848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.797859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.798154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.798164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.798473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.798483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.798667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.798680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.798966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.798976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.799265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.799276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.799562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.799572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 
00:29:14.468 [2024-11-04 12:33:48.799900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.468 [2024-11-04 12:33:48.799910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.468 qpair failed and we were unable to recover it. 00:29:14.468 [2024-11-04 12:33:48.800233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.800243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.800546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.800557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.800874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.800885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.801099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.801112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.801415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.801426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.801769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.801779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.802029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.802039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.802374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.802384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.802575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.802585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 
00:29:14.469 [2024-11-04 12:33:48.802925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.802935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.803273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.803283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.803661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.803671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.803975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.803985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.804301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.804311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.804629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.804639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.804932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.804943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.805250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.805260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.805568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.805578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.805891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.805902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 
00:29:14.469 [2024-11-04 12:33:48.806226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.806237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.806622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.806633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.806822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.806832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.807195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.807205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.807471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.807481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.807782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.807792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.808075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.808085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.808400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.808410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.808754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.808764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.809063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.809072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 
00:29:14.469 [2024-11-04 12:33:48.809378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.809388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.809718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.809731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.810044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.810056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.810403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.810413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.810721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.810732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.811057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.811068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.811382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.811393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.811734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.811745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.812056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.812067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 00:29:14.469 [2024-11-04 12:33:48.812371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.469 [2024-11-04 12:33:48.812382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.469 qpair failed and we were unable to recover it. 
00:29:14.470 [2024-11-04 12:33:48.812685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.812695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.813002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.813012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.813297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.813307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.813619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.813628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.813921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.813932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.814254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.814265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.814579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.814589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.814882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.814892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.815076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.815096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.815420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.815431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 
00:29:14.470 [2024-11-04 12:33:48.815618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.815628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.815932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.815943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.816254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.816263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.816550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.816560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.816843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.816853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.817173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.817183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.817503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.817513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.817799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.817810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.818124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.818134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.818453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.818463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 
00:29:14.470 [2024-11-04 12:33:48.818768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.818778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.819071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.819080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.819400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.819410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.819735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.819744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.820039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.820048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.820421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.820431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.820736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.820751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.820987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.820998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.821167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.821179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.821544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.821555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 
00:29:14.470 [2024-11-04 12:33:48.821862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.821874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.822184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.822194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.822509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.822520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.822816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.822827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.470 qpair failed and we were unable to recover it. 00:29:14.470 [2024-11-04 12:33:48.823137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.470 [2024-11-04 12:33:48.823147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.823462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.823473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.823782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.823792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.824086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.824096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.824392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.824401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.824708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.824718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 
00:29:14.471 [2024-11-04 12:33:48.825003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.825013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.825319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.825329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.825644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.825654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.825925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.825936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.826239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.826249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.826555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.826565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.826876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.826887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.827189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.827202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.827483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.827493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.827790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.827800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 
00:29:14.471 [2024-11-04 12:33:48.828102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.828112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.828420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.828430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.828820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.828830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.829122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.829132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.829435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.829446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.829756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.829767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.830051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.830062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.830339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.830350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.830668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.830678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 00:29:14.471 [2024-11-04 12:33:48.830971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.830983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it. 
00:29:14.471 [2024-11-04 12:33:48.831370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.471 [2024-11-04 12:33:48.831380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.471 qpair failed and we were unable to recover it.
00:29:14.471-00:29:14.477 [the same three-part error triplet repeats with only timestamps advancing, from 12:33:48.831370 through 12:33:48.896291 (~210 occurrences): every reconnect attempt by posix_sock_create()/nvme_tcp_qpair_connect_sock() to tqpair=0x146a180 at 10.0.0.2 port 4420 fails with errno = 111, and each time the qpair cannot be recovered]
00:29:14.477 [2024-11-04 12:33:48.896566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.896575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.896869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.896879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.897187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.897197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.897482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.897492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.897777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.897787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.898098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.898108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.898424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.898436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.898736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.898751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.899066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.899077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.899386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.899397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 
00:29:14.477 [2024-11-04 12:33:48.899704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.899714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.899875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.899887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.900087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.900097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.900369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.900378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.900701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.900711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.900999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.901009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.901310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.901320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.901631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.901642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.901950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.901961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.902250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.902260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 
00:29:14.477 [2024-11-04 12:33:48.902589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.902600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.902910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.902921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.903170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.903180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.903513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.903523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.903807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.903817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.904153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.904162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.904478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.904487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.904757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.904767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.905001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.905017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.905363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.905373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 
00:29:14.477 [2024-11-04 12:33:48.905683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.477 [2024-11-04 12:33:48.905693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.477 qpair failed and we were unable to recover it. 00:29:14.477 [2024-11-04 12:33:48.906007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.906017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.906299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.906310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.906611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.906623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.906937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.906947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.907250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.907260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.907638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.907648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.907920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.907931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.908247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.908257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.908544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.908554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 
00:29:14.478 [2024-11-04 12:33:48.908934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.908944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.909320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.909330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.909622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.909632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.909952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.909963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.910275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.910285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.910566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.910577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.910897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.910908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.911229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.911240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.911546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.911556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.911839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.911850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 
00:29:14.478 [2024-11-04 12:33:48.912134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.912144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.912478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.912488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.912798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.912808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.913186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.913196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.913508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.913519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.913845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.913856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.914156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.914167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.914476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.914487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.914809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.914819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.915169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.915179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 
00:29:14.478 [2024-11-04 12:33:48.915475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.915485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.915809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.915820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.916095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.916107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.916414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.916424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.916731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.916741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.917055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.917064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.917392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.917402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.917687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.917697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.917943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.917955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 00:29:14.478 [2024-11-04 12:33:48.918261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.478 [2024-11-04 12:33:48.918271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.478 qpair failed and we were unable to recover it. 
00:29:14.479 [2024-11-04 12:33:48.918570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.918580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.918741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.918756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.919054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.919064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.919381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.919391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.919698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.919708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.920032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.920042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.920355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.920365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.920675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.920685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.920997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.921008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.921288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.921299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 
00:29:14.479 [2024-11-04 12:33:48.921520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.921530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.921839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.921849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.922173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.922182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.922480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.922490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.922665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.922675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.923032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.923042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.923203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.923215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.923501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.923511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.923809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.923820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.924112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.924122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 
00:29:14.479 [2024-11-04 12:33:48.924401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.924411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.924707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.924718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.924992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.925002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.925331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.925341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.925617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.925627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.925922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.925933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.926242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.926252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.926563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.926573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.926877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.926887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.927172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.927182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 
00:29:14.479 [2024-11-04 12:33:48.927479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.927489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.927847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.927859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.928172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.928182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.928491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.479 [2024-11-04 12:33:48.928502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.479 qpair failed and we were unable to recover it. 00:29:14.479 [2024-11-04 12:33:48.928807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.928817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.929117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.929127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.929428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.929438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.929725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.929735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.930022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.930034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.930332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.930343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 
00:29:14.480 [2024-11-04 12:33:48.930649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.930660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.930967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.930978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.931280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.931290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.931596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.931606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.931895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.931905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.932225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.932235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.932520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.932530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.932835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.932845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.933156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.933167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.933500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.933511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 
00:29:14.480 [2024-11-04 12:33:48.933820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.933831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.934119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.934130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.934446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.934456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.934755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.934766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.935031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.935041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.935364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.935374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.935688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.935697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.936008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.936018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.936327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.936339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 00:29:14.480 [2024-11-04 12:33:48.936647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.480 [2024-11-04 12:33:48.936658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420 00:29:14.480 qpair failed and we were unable to recover it. 
00:29:14.480 [2024-11-04 12:33:48.936971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.480 [2024-11-04 12:33:48.936982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146a180 with addr=10.0.0.2, port=4420
00:29:14.480 qpair failed and we were unable to recover it.
[... the three entries above repeat for ~90 consecutive connect attempts on tqpair=0x146a180 through 12:33:48.963760; only the microsecond timestamps differ ...]
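errno = 111 in the entries above is Linux ECONNREFUSED: the connect() to 10.0.0.2 on port 4420 (the IANA-assigned NVMe/TCP port) is actively refused, typically because nothing is listening on that port yet. A minimal standalone sketch (an assumed illustration, not SPDK code) that reproduces the same errno:

/* Standalone repro of the errno 111 seen in the log: connect() to an
 * address/port with no listener fails with ECONNREFUSED on Linux,
 * which is what posix_sock_create reports above. Not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect: Connection refused (errno 111) */
        fprintf(stderr, "connect: %s (errno %d)\n", strerror(errno), errno);
    }
    close(fd);
    return 0;
}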
00:29:14.482 [2024-11-04 12:33:48.963791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1460ed0 (9): Bad file descriptor
00:29:14.482 [2024-11-04 12:33:48.964441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.482 [2024-11-04 12:33:48.964470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:14.482 qpair failed and we were unable to recover it.
[... the connect()/sock-connection-error/qpair-failed triple repeats for ~120 further attempts on tqpair=0x7f7e38000b90 through 12:33:49.000018; only the timestamps differ ...]
00:29:14.486 [2024-11-04 12:33:49.000393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.000401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.000691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.000698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.001055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.001062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.001353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.001360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.001666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.001672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.001992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.002000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.002327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.002335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.002636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.002643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.003080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.003087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.003280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.003295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 
00:29:14.486 [2024-11-04 12:33:49.003676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.003683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.004036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.004043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.004350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.004357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.004679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.004686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.004838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.004846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.005216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.005223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.005509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.005517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.005823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.005831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.006110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.006119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.006430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.006437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 
00:29:14.486 [2024-11-04 12:33:49.006743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.006755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.007054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.007060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.007382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.007389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.007760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.007769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.008105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.008113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.008274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.008282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.008619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.008626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.008933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.486 [2024-11-04 12:33:49.008941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.486 qpair failed and we were unable to recover it. 00:29:14.486 [2024-11-04 12:33:49.009265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.487 [2024-11-04 12:33:49.009272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.487 qpair failed and we were unable to recover it. 00:29:14.487 [2024-11-04 12:33:49.009593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.487 [2024-11-04 12:33:49.009601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.487 qpair failed and we were unable to recover it. 
00:29:14.487 [2024-11-04 12:33:49.009926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.487 [2024-11-04 12:33:49.009933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.487 qpair failed and we were unable to recover it. 00:29:14.487 [2024-11-04 12:33:49.010243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.487 [2024-11-04 12:33:49.010251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.487 qpair failed and we were unable to recover it. 00:29:14.487 [2024-11-04 12:33:49.010556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.487 [2024-11-04 12:33:49.010564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.487 qpair failed and we were unable to recover it. 00:29:14.487 [2024-11-04 12:33:49.010849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.487 [2024-11-04 12:33:49.010857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.487 qpair failed and we were unable to recover it. 00:29:14.487 [2024-11-04 12:33:49.011162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.487 [2024-11-04 12:33:49.011169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.487 qpair failed and we were unable to recover it. 00:29:14.487 [2024-11-04 12:33:49.011461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.487 [2024-11-04 12:33:49.011467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.487 qpair failed and we were unable to recover it. 00:29:14.487 [2024-11-04 12:33:49.011643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.487 [2024-11-04 12:33:49.011650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.487 qpair failed and we were unable to recover it. 00:29:14.487 [2024-11-04 12:33:49.012005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.487 [2024-11-04 12:33:49.012012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.487 qpair failed and we were unable to recover it. 00:29:14.487 [2024-11-04 12:33:49.012314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.487 [2024-11-04 12:33:49.012321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.487 qpair failed and we were unable to recover it. 00:29:14.487 [2024-11-04 12:33:49.012620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.487 [2024-11-04 12:33:49.012627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.487 qpair failed and we were unable to recover it. 
00:29:14.487 [2024-11-04 12:33:49.013028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.487 [2024-11-04 12:33:49.013036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.487 qpair failed and we were unable to recover it. 00:29:14.755 [2024-11-04 12:33:49.013326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-11-04 12:33:49.013334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-11-04 12:33:49.013643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-11-04 12:33:49.013651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-11-04 12:33:49.013943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-11-04 12:33:49.013951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-11-04 12:33:49.014254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-11-04 12:33:49.014262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-11-04 12:33:49.014567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-11-04 12:33:49.014575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-11-04 12:33:49.014874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-11-04 12:33:49.014882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-11-04 12:33:49.015828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-11-04 12:33:49.015847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-11-04 12:33:49.016056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-11-04 12:33:49.016064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-11-04 12:33:49.016382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-11-04 12:33:49.016389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 
00:29:14.755 [2024-11-04 12:33:49.016704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-11-04 12:33:49.016711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-11-04 12:33:49.017041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-11-04 12:33:49.017049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-11-04 12:33:49.017369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-11-04 12:33:49.017377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-11-04 12:33:49.017690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-11-04 12:33:49.017697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-11-04 12:33:49.017900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-11-04 12:33:49.017907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-11-04 12:33:49.018284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-11-04 12:33:49.018293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-11-04 12:33:49.018611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.755 [2024-11-04 12:33:49.018619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.755 qpair failed and we were unable to recover it. 00:29:14.755 [2024-11-04 12:33:49.018910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.018917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.019190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.019199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.019491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.019497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 
00:29:14.756 [2024-11-04 12:33:49.019810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.019818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.020030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.020038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.020355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.020363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.020667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.020675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.021000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.021008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.021294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.021301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.021617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.021625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.021927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.021933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.022233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.022241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.022555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.022562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 
00:29:14.756 [2024-11-04 12:33:49.022882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.022890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.023215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.023223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.023541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.023549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.023869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.023876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.024254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.024261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.024557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.024564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.024873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.024881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.025207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.025215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.025542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.025548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.025859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.025866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 
00:29:14.756 [2024-11-04 12:33:49.026160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.026167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.026394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.026401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.026679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.026687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.026892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.026900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.027207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.027216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.027396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.027403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.027683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.027691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.027989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.027997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.028304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.028310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.028708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.028715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 
00:29:14.756 [2024-11-04 12:33:49.029009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.029018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.029318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.029326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.029666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.029672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.756 qpair failed and we were unable to recover it. 00:29:14.756 [2024-11-04 12:33:49.030025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.756 [2024-11-04 12:33:49.030032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.030236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.030243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.030553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.030560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.030810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.030817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.031195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.031202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.031497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.031505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.031810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.031817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 
00:29:14.757 [2024-11-04 12:33:49.032139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.032146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.032445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.032453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.032768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.032775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.033057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.033064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.033374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.033381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.033698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.033705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.033994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.034001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.034319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.034325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.034497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.034505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.034773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.034780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 
00:29:14.757 [2024-11-04 12:33:49.035076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.035083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.035406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.035413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.035730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.035737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.036103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.036112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.036394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.036402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.036713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.036721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.037030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.037038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.037328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.037335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.037645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.037652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.038000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.038007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 
00:29:14.757 [2024-11-04 12:33:49.038284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.038291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.038559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.038566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.038864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.038871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.039186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.039193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.039501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.039508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.039772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.039779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.040100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.040107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.040413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.040420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.040719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.040726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 00:29:14.757 [2024-11-04 12:33:49.041015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.757 [2024-11-04 12:33:49.041022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.757 qpair failed and we were unable to recover it. 
00:29:14.757 [2024-11-04 12:33:49.041313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.757 [2024-11-04 12:33:49.041319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:14.757 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure record repeats continuously with advancing timestamps (12:33:49.041 through 12:33:49.098), every attempt to 10.0.0.2:4420 refused with errno = 111 ...]
00:29:14.763 [2024-11-04 12:33:49.098922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.763 [2024-11-04 12:33:49.098929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:14.763 qpair failed and we were unable to recover it.
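errno 111 on Linux is ECONNREFUSED: the host reached 10.0.0.2, but nothing was accepting on port 4420, so each connect() was actively refused. A minimal bash sketch (illustrative only, not part of the test harness; 127.0.0.1 and 4420 are stand-in values) reproduces the same failure against a port with no listener:

# Probe a TCP port from bash; /dev/tcp is a bash builtin pseudo-device.
# With no listener bound to the port, connect() fails with errno 111
# (ECONNREFUSED), exactly the failure the records above report.
if (exec 3<>/dev/tcp/127.0.0.1/4420) 2>/dev/null; then
  echo "connected: a listener is up on 4420"
else
  echo "connection refused: connect() returned errno 111"
fi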
00:29:14.763 (connect()/qpair error pair repeated 4 times, 12:33:49.099208 - 12:33:49.100160)
00:29:14.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1828539 Killed "${NVMF_APP[@]}" "$@"
00:29:14.763 (connect()/qpair error pair repeated 3 times, 12:33:49.100518 - 12:33:49.101125)
00:29:14.763 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:14.763 (connect()/qpair error pair repeated once, 12:33:49.101440)
00:29:14.763 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:14.763 (connect()/qpair error pair repeated once, 12:33:49.101755)
00:29:14.763 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:29:14.763 (connect()/qpair error pair repeated once, 12:33:49.102067)
00:29:14.763 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:14.763 (connect()/qpair error pair repeated once, 12:33:49.102225)
00:29:14.763 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:14.763 (connect()/qpair error pair repeated 5 times, 12:33:49.102464 - 12:33:49.103872)
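The trace lines above are the pivot of this test case: target_disconnect.sh has just SIGKILLed the running target app (the "Killed ${NVMF_APP[@]}" line) and disconnect_init/nvmfappstart bring a fresh nvmf_tgt up while the host keeps retrying its qpairs. A rough sketch of that step, using the function names and arguments from the trace; the bodies are illustrative, not the script's actual code:

  # step 1: the running target app is SIGKILLed (the "Killed" line above)
  kill -9 1828539                 # pid taken from the job-control message
  # step 2: disconnect_init restarts the target; body is a sketch only
  disconnect_init() {
      nvmfappstart -m 0xF0        # new nvmf_tgt pinned to cores 4-7 (mask 0xF0)
      # then re-create the TCP transport, subsystem, and the
      # 10.0.0.2:4420 listener so the host can reconnect
  }
  disconnect_init 10.0.0.2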
00:29:14.764 (connect()/qpair error pair repeated 19 times, 12:33:49.104162 - 12:33:49.109785)
00:29:14.764 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1829470
00:29:14.764 (connect()/qpair error pair repeated once, 12:33:49.110186)
00:29:14.764 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1829470
00:29:14.764 (connect()/qpair error pair repeated once, 12:33:49.110526)
00:29:14.764 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:14.764 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1829470 ']'
00:29:14.764 (connect()/qpair error pair repeated once, 12:33:49.110853)
00:29:14.764 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:14.764 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:14.764 (connect()/qpair error pair repeated once, 12:33:49.111188)
00:29:14.764 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:14.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:14.764 (connect()/qpair error pair repeated once, 12:33:49.111438)
00:29:14.764 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:14.764 (connect()/qpair error pair repeated once, 12:33:49.111762)
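waitforlisten then blocks until the relaunched nvmf_tgt (pid 1829470) is accepting RPCs on /var/tmp/spdk.sock, giving up after max_retries=100 attempts; the host's connect() retries keep failing in the background until that listener is up. A sketch of such a polling loop under those assumptions (this is not the SPDK helper's actual source; the three values are taken from the trace above):

  max_retries=100                               # value from the trace
  nvmfpid=1829470                               # value from the trace
  for ((i = 0; i < max_retries; i++)); do
      [[ -S /var/tmp/spdk.sock ]] && break      # RPC listener is up
      kill -0 "$nvmfpid" 2>/dev/null || exit 1  # app died before listening
      sleep 0.5
  done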
00:29:14.764 (connect()/qpair error pair repeated once, 12:33:49.111873)
00:29:14.764 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:14.764 (connect()/qpair error pair repeated 8 times, 12:33:49.112248 - 12:33:49.114128)
00:29:14.765 (connect()/qpair error pair repeated 130 times, 12:33:49.114431 - 12:33:49.152306)
00:29:14.769 [2024-11-04 12:33:49.152593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.152601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.152910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.152917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.153095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.153103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.153416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.153423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.153725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.153734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.154076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.154084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.154399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.154407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.154617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.154624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.154800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.154814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.155230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.155238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 
00:29:14.769 [2024-11-04 12:33:49.155541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.155549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.155844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.155851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.156164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.156172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.156567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.156574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.156860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.156868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.157205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.157212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.157551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.157558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.157762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.157770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.157877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.157884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.158197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.158204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 
00:29:14.769 [2024-11-04 12:33:49.158523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.158531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.158892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.158900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.159086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.159093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.159329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.159336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.159483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.159490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.159650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.159657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.159962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.159970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.160291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.160299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.160614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.160621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.160796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.160804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 
00:29:14.769 [2024-11-04 12:33:49.161042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.161049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.161341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.161348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.161695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.161702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.161875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.769 [2024-11-04 12:33:49.161882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.769 qpair failed and we were unable to recover it. 00:29:14.769 [2024-11-04 12:33:49.162191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.162198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.162505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.162513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.162706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.162714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.163082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.163090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.163393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.163400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.163743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.163755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 
00:29:14.770 [2024-11-04 12:33:49.164065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.164073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.164386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.164395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.164713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.164720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.165033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.165040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.165226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.165235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.165609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.165616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.165787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.165795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.165975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.165982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.166185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.166193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.166528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.166535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 
00:29:14.770 [2024-11-04 12:33:49.166719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.166727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.167012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.167020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.167332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.167339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.167652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.167659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.167983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.167991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.168175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.168183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.168565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.168571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.168756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.168764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.169060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.169068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 00:29:14.770 [2024-11-04 12:33:49.169376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.770 [2024-11-04 12:33:49.169384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.770 qpair failed and we were unable to recover it. 
00:29:14.770 [2024-11-04 12:33:49.169695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.770 [2024-11-04 12:33:49.169702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:14.770 qpair failed and we were unable to recover it.
00:29:14.770 [2024-11-04 12:33:49.169772] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization...
00:29:14.770 [2024-11-04 12:33:49.169816] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence continues for tqpair=0x7f7e38000b90 at timestamps 2024-11-04 12:33:49.170026 through 12:33:49.171980 ...]
[... the same failure sequence repeats for tqpair=0x7f7e38000b90 at timestamps 2024-11-04 12:33:49.172151 through 12:33:49.202172 ...]
00:29:14.773 [2024-11-04 12:33:49.202487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.773 [2024-11-04 12:33:49.202495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:14.773 qpair failed and we were unable to recover it.
00:29:14.773 [2024-11-04 12:33:49.202687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.773 [2024-11-04 12:33:49.202695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.773 qpair failed and we were unable to recover it. 00:29:14.773 [2024-11-04 12:33:49.202903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.773 [2024-11-04 12:33:49.202913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.773 qpair failed and we were unable to recover it. 00:29:14.773 [2024-11-04 12:33:49.203113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.773 [2024-11-04 12:33:49.203121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.773 qpair failed and we were unable to recover it. 00:29:14.773 [2024-11-04 12:33:49.203341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.773 [2024-11-04 12:33:49.203349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.773 qpair failed and we were unable to recover it. 00:29:14.773 [2024-11-04 12:33:49.203696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.773 [2024-11-04 12:33:49.203704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.773 qpair failed and we were unable to recover it. 00:29:14.773 [2024-11-04 12:33:49.204023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.773 [2024-11-04 12:33:49.204032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.773 qpair failed and we were unable to recover it. 00:29:14.773 [2024-11-04 12:33:49.204340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.773 [2024-11-04 12:33:49.204348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.773 qpair failed and we were unable to recover it. 00:29:14.773 [2024-11-04 12:33:49.204675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.773 [2024-11-04 12:33:49.204684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.773 qpair failed and we were unable to recover it. 00:29:14.773 [2024-11-04 12:33:49.205004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.773 [2024-11-04 12:33:49.205013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.773 qpair failed and we were unable to recover it. 00:29:14.773 [2024-11-04 12:33:49.205205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.773 [2024-11-04 12:33:49.205214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.773 qpair failed and we were unable to recover it. 
00:29:14.773 [2024-11-04 12:33:49.205505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.773 [2024-11-04 12:33:49.205513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.773 qpair failed and we were unable to recover it. 00:29:14.773 [2024-11-04 12:33:49.205807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.773 [2024-11-04 12:33:49.205815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.773 qpair failed and we were unable to recover it. 00:29:14.773 [2024-11-04 12:33:49.206122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.773 [2024-11-04 12:33:49.206130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.206499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.206507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.206816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.206825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.207209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.207217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.207503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.207512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.207712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.207722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.208042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.208051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.208217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.208226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 
00:29:14.774 [2024-11-04 12:33:49.208434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.208443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.208708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.208716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.209012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.209021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.209332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.209341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.209498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.209507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.209692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.209700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.209998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.210007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.210208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.210217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.210415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.210424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.210608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.210617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 
00:29:14.774 [2024-11-04 12:33:49.210919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.210927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.211248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.211256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.211547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.211555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.211866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.211874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.212186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.212196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.212511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.212519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.212869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.212877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.213211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.213220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.213529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.213538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.213814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.213822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 
00:29:14.774 [2024-11-04 12:33:49.214118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.214127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.214450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.214459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.214620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.214629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.214952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.214960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.215274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.215282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.215577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.215585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.774 [2024-11-04 12:33:49.215906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.774 [2024-11-04 12:33:49.215916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.774 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.216223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.216232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.216546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.216555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.216865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.216873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 
00:29:14.775 [2024-11-04 12:33:49.217181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.217190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.217494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.217503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.217705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.217714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.217988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.217997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.218388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.218396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.218610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.218619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.218904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.218913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.219129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.219138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.219477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.219485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.219795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.219804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 
00:29:14.775 [2024-11-04 12:33:49.220142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.220150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.220466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.220474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.220670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.220679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.220967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.220976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.221280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.221288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.221587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.221595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.221917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.221926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.222241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.222250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.222542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.222551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.222856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.222865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 
00:29:14.775 [2024-11-04 12:33:49.223177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.223185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.223544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.223553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.223863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.223872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.224187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.224195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.224487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.224497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.224804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.224813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.225125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.225133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.225448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.225456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.225638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.225647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.225991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.226000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 
00:29:14.775 [2024-11-04 12:33:49.226158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.226166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.226482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.226490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.226802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.226811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.227146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.227154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.227468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.775 [2024-11-04 12:33:49.227477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.775 qpair failed and we were unable to recover it. 00:29:14.775 [2024-11-04 12:33:49.227795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.227803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.228139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.228147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.228322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.228331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.228663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.228671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.228970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.228979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 
00:29:14.776 [2024-11-04 12:33:49.229276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.229284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.229545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.229554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.229764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.229773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.230070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.230077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.230412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.230420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.230751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.230765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.230954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.230963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.231263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.231272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.231561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.231569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.231876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.231884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 
00:29:14.776 [2024-11-04 12:33:49.232056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.232064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.232363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.232371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.232542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.232551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.232885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.232894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.233213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.233221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.233536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.233544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.233866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.233875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.234179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.234187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.234361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.234370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.234543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.234553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 
00:29:14.776 [2024-11-04 12:33:49.234738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.234752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.235057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.235065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.235386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.235394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.235693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.235701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.236021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.236031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.236343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.236351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.236668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.236676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.236852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.236861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.237217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.237225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.237523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.237531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 
00:29:14.776 [2024-11-04 12:33:49.237850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.237858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.238077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.238085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.238426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.776 [2024-11-04 12:33:49.238434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.776 qpair failed and we were unable to recover it. 00:29:14.776 [2024-11-04 12:33:49.238756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.777 [2024-11-04 12:33:49.238768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.777 qpair failed and we were unable to recover it. 00:29:14.777 [2024-11-04 12:33:49.239071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.777 [2024-11-04 12:33:49.239079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.777 qpair failed and we were unable to recover it. 00:29:14.777 [2024-11-04 12:33:49.239435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.777 [2024-11-04 12:33:49.239444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.777 qpair failed and we were unable to recover it. 00:29:14.777 [2024-11-04 12:33:49.239768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.777 [2024-11-04 12:33:49.239777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.777 qpair failed and we were unable to recover it. 00:29:14.777 [2024-11-04 12:33:49.240100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.777 [2024-11-04 12:33:49.240108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.777 qpair failed and we were unable to recover it. 00:29:14.777 [2024-11-04 12:33:49.240410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.777 [2024-11-04 12:33:49.240418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.777 qpair failed and we were unable to recover it. 00:29:14.777 [2024-11-04 12:33:49.240732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.777 [2024-11-04 12:33:49.240740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.777 qpair failed and we were unable to recover it. 
00:29:14.777 [2024-11-04 12:33:49.241032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.777 [2024-11-04 12:33:49.241040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:14.777 qpair failed and we were unable to recover it.
00:29:14.778 [2024-11-04 12:33:49.252634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.778 [2024-11-04 12:33:49.252642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:14.778 qpair failed and we were unable to recover it.
00:29:14.778 [2024-11-04 12:33:49.252937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.778 [2024-11-04 12:33:49.252945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:14.778 qpair failed and we were unable to recover it.
00:29:14.778 [2024-11-04 12:33:49.254738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:14.778 [2024-11-04 12:33:49.254886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.778 [2024-11-04 12:33:49.254895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:14.778 qpair failed and we were unable to recover it.
00:29:14.778 [2024-11-04 12:33:49.255513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.778 [2024-11-04 12:33:49.255521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:14.778 qpair failed and we were unable to recover it.
00:29:14.781 [2024-11-04 12:33:49.288647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.781 [2024-11-04 12:33:49.288655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:14.781 qpair failed and we were unable to recover it.
00:29:14.781 [2024-11-04 12:33:49.289043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.781 [2024-11-04 12:33:49.289052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:14.781 qpair failed and we were unable to recover it.
00:29:14.781 [2024-11-04 12:33:49.290794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:14.781 [2024-11-04 12:33:49.290820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:14.781 [2024-11-04 12:33:49.290828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:14.781 [2024-11-04 12:33:49.290835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:14.781 [2024-11-04 12:33:49.290840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:14.781 [2024-11-04 12:33:49.290953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.781 [2024-11-04 12:33:49.290962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:14.781 qpair failed and we were unable to recover it.
00:29:14.781 [2024-11-04 12:33:49.291581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.781 [2024-11-04 12:33:49.291589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:14.781 qpair failed and we were unable to recover it.
00:29:14.781 [2024-11-04 12:33:49.292454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:14.781 [2024-11-04 12:33:49.292593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:14.782 [2024-11-04 12:33:49.292740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:14.782 [2024-11-04 12:33:49.292740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:29:14.782 [2024-11-04 12:33:49.292740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.782 [2024-11-04 12:33:49.292751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:14.782 qpair failed and we were unable to recover it.
00:29:14.782 [2024-11-04 12:33:49.294003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.782 [2024-11-04 12:33:49.294012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:14.782 qpair failed and we were unable to recover it.
00:29:14.783 [2024-11-04 12:33:49.302155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.783 [2024-11-04 12:33:49.302163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:14.783 qpair failed and we were unable to recover it.
00:29:14.783 [2024-11-04 12:33:49.302358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.302366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.302535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.302545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.302941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.303054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e40000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.303333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.303372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e40000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.303586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.303597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.303848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.303858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.304178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.304185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.304453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.304462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.304532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.304541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.304701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.304710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 
00:29:14.783 [2024-11-04 12:33:49.305023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.305031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.305201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.305209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.305428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.305436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.305742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.305753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.306058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.306067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.306244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.306253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.306525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.306533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.306708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.306717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.307001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.307011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.307205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.307214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 
00:29:14.783 [2024-11-04 12:33:49.307383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.307391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.307660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.307667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.307838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.307846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.308182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.308189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.308530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.308539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.308868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.308877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.309220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.309228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.309519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.309528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.309808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.309820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.310108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.310116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 
00:29:14.783 [2024-11-04 12:33:49.310494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.310503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.310683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.310692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.310966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.783 [2024-11-04 12:33:49.310975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.783 qpair failed and we were unable to recover it. 00:29:14.783 [2024-11-04 12:33:49.311171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.784 [2024-11-04 12:33:49.311180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-11-04 12:33:49.311491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.784 [2024-11-04 12:33:49.311500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-11-04 12:33:49.311802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.784 [2024-11-04 12:33:49.311811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-11-04 12:33:49.312166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.784 [2024-11-04 12:33:49.312176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-11-04 12:33:49.312356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.784 [2024-11-04 12:33:49.312366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-11-04 12:33:49.312691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.784 [2024-11-04 12:33:49.312699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-11-04 12:33:49.312989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.784 [2024-11-04 12:33:49.312999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.784 qpair failed and we were unable to recover it. 
00:29:14.784 [2024-11-04 12:33:49.313334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.784 [2024-11-04 12:33:49.313343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-11-04 12:33:49.313655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.784 [2024-11-04 12:33:49.313663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-11-04 12:33:49.313953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.784 [2024-11-04 12:33:49.313962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-11-04 12:33:49.314147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.784 [2024-11-04 12:33:49.314155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-11-04 12:33:49.314432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.784 [2024-11-04 12:33:49.314440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-11-04 12:33:49.314630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.784 [2024-11-04 12:33:49.314639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-11-04 12:33:49.314821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.784 [2024-11-04 12:33:49.314830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-11-04 12:33:49.315171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.784 [2024-11-04 12:33:49.315179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-11-04 12:33:49.315467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.784 [2024-11-04 12:33:49.315475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-11-04 12:33:49.315658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.784 [2024-11-04 12:33:49.315667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:14.784 qpair failed and we were unable to recover it. 
00:29:15.052 [2024-11-04 12:33:49.315979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-11-04 12:33:49.315989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-11-04 12:33:49.316271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-11-04 12:33:49.316279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-11-04 12:33:49.316443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-11-04 12:33:49.316453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-11-04 12:33:49.316629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-11-04 12:33:49.316636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-11-04 12:33:49.316927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.052 [2024-11-04 12:33:49.316936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.317257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.317266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.317321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.317327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.317608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.317616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.317807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.317815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.318120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.318129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 [2024-11-04 12:33:49.318326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.318336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.318604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.318612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.318886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.318895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.319190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.319199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.319362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.319371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.319526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.319535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.319836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.319844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.320032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.320041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.320367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.320378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.320688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.320696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 [2024-11-04 12:33:49.320971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.320980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.321141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.321149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.321457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.321466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.321644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.321652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.321985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.321994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.322277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.322286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.322459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.322467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.322771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.322781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.322966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.322974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.323274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.323281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 [2024-11-04 12:33:49.323612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.323620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.323960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.323969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.324232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.324241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.324546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.324554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.324846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.324855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.325164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.325173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.325478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.325487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.325773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.325782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.326093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.326101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.326414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.326423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 [2024-11-04 12:33:49.326758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.326771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.327086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.327096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.327408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.327417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.327705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.327714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.328016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.328027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.328256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.328265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.328436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.328445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.328757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.328767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.328967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.328976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.329154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.329164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 [2024-11-04 12:33:49.329492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.329501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.329801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.329810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.330138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.053 [2024-11-04 12:33:49.330146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-04 12:33:49.330448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.330457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.330722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.330732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.331033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.331042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.331358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.331367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.331739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.331751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.332031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.332043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.332355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.332364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 
00:29:15.054 [2024-11-04 12:33:49.332738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.332752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.333042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.333051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.333369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.333378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.333683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.333692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.334001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.334011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.334286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.334294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.334562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.334571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.334736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.334750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.334956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.334964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.335271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.335281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 
00:29:15.054 [2024-11-04 12:33:49.335609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.335619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.335924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.335934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.336254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.336263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.336580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.336588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.336765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.336774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.337107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.337115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.337406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.337415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.337725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.337733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.338044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.338053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.338337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.338345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 
00:29:15.054 [2024-11-04 12:33:49.338660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.338668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.338866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.338876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.339217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.339225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.339537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.339546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.339882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.339890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.340163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.340172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.340438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.340445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.340781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.340789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.341129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.341137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.341308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.341317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 
00:29:15.054 [2024-11-04 12:33:49.341625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.341634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.341913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.341922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.342198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.342206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.342507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.342515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.342848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.342857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.343162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.343170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.343347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.343356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.343531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.343540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.343853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.343863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.344176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.344184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 
00:29:15.054 [2024-11-04 12:33:49.344512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.344520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.344832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.054 [2024-11-04 12:33:49.344840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.054 qpair failed and we were unable to recover it. 00:29:15.054 [2024-11-04 12:33:49.345160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.345168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.345496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.345504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.345808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.345817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.346132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.346142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.346430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.346439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.346736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.346744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.347103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.347112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.347417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.347425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 
00:29:15.055 [2024-11-04 12:33:49.347753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.347761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.348075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.348083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.348415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.348423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.348722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.348731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.349003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.349011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.349296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.349305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.349613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.349621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.349936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.349944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.350227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.350235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.350501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.350508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 
00:29:15.055 [2024-11-04 12:33:49.350840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.350849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.351189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.351198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.351501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.351509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.351773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.351782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.352086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.352094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.352369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.352377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.352689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.352698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.353034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.353044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.353358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.353366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.353671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.353679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 
00:29:15.055 [2024-11-04 12:33:49.353987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.353995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.354339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.354347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.354653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.354661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.354978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.354987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.355289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.355297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.355604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.355613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.355878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.355887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.356033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.356042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.356376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.356384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.356545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.356555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 
00:29:15.055 [2024-11-04 12:33:49.356731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.356741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.357019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.357027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.357203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.357211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.357520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.357527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.357831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.357839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.357995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.358003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.358263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.358271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.358579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.358587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.358742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.358756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.359024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.359032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 
00:29:15.055 [2024-11-04 12:33:49.359209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.359217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.359545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.359553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.359888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.055 [2024-11-04 12:33:49.359897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.055 qpair failed and we were unable to recover it. 00:29:15.055 [2024-11-04 12:33:49.360073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.360081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.360419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.360426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.360602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.360611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.360782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.360789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.361093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.361102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.361420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.361428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.361605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.361614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 
00:29:15.056 [2024-11-04 12:33:49.361937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.361945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.362212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.362219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.362521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.362529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.362817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.362837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.363111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.363120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.363419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.363430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.363609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.363617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.363892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.363900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.364266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.364274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.364430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.364440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 
00:29:15.056 [2024-11-04 12:33:49.364765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.364772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.364964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.364973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.365264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.365272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.365464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.365472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.365789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.365797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.365972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.365978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.366284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.366292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.366600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.366608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.366897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.366906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.367062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.367071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 
00:29:15.056 [2024-11-04 12:33:49.367222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.367230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.367544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.367552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.367639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.367646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.367912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.367921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.368231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.368240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.368534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.368542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.368854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.368862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.369044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.369053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.369338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.369346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.369653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.369661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 
00:29:15.056 [2024-11-04 12:33:49.369698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.369705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.369984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.369993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.370178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.370188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.370371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.370379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.370692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.370700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.370884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.370893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.371199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.371207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.056 [2024-11-04 12:33:49.371366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.056 [2024-11-04 12:33:49.371374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.056 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.371684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.371692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.372010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.372019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 
00:29:15.057 [2024-11-04 12:33:49.372193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.372202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.372506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.372514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.372816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.372824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.373181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.373188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.373370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.373378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.373712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.373723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.373990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.373998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.374159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.374168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.374211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.374221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.374386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.374394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 
00:29:15.057 [2024-11-04 12:33:49.374736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.374744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.374946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.374956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.375269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.375278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.375397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.375406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.375739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.375756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.375925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.375934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.376123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.376131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.376411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.376420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.376593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.376602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.376933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.376941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 
00:29:15.057 [2024-11-04 12:33:49.377263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.377271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.377632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.377640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.378021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.378029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.378363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.378371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.378449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.378456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.378616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.378624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.378991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.379000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.379164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.379173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.379429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.379436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.379795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.379803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 
00:29:15.057 [2024-11-04 12:33:49.380061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.380069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.380133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.380140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.380315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.380324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.380626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.380635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.380970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.380978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.381102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.381110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.381391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.381399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.381669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.381677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.381995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.382003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.382046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.382053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 
00:29:15.057 [2024-11-04 12:33:49.382315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.382324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.382366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.382373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.382694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.382702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.382871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.382879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.383050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.383058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.383345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.383356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.383534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.383543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.383888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.383897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.384204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.384212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 00:29:15.057 [2024-11-04 12:33:49.384384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.057 [2024-11-04 12:33:49.384393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.057 qpair failed and we were unable to recover it. 
00:29:15.058 [2024-11-04 12:33:49.384569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.384578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.384866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.384875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.385198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.385207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.385514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.385522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.385827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.385835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.386153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.386161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.386318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.386327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.386633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.386641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.386959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.386968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.387261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.387270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 
00:29:15.058 [2024-11-04 12:33:49.387590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.387599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.387905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.387914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.387956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.387963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.388119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.388128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.388415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.388423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.388622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.388631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.388926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.388935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.389239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.389247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.389548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.389557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 00:29:15.058 [2024-11-04 12:33:49.389881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.058 [2024-11-04 12:33:49.389890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.058 qpair failed and we were unable to recover it. 
00:29:15.058 [2024-11-04 12:33:49.390201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.390209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.390545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.390553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.390855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.390863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.391164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.391173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.391476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.391485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.391754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.391766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.391954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.391962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.392275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.392284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.392615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.392623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.392952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.392961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.393263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.393271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.393571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.393580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.393864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.393872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.394186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.394194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.394494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.394502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.394792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.394802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.395110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.395118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.395447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.395455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.395752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.395765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.396037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.396045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.396353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.396361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.396689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.396697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.396997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.397005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.397188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.397197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.397573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.397582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.397888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.397897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.398208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.398216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.398546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.398553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.398897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.398906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.399227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.058 [2024-11-04 12:33:49.399235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.058 qpair failed and we were unable to recover it.
00:29:15.058 [2024-11-04 12:33:49.399517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.399526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.399854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.399864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.400168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.400177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.400462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.400469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.400762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.400770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.400965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.400973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.401259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.401267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.401585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.401593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.401903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.401912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.402094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.402103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.402414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.402422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.402577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.402585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.402872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.402881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.403192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.403200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.403530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.403537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.403871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.403879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.404154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.404163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.404430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.404439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.404775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.404784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.405112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.405120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.405423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.405431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.405763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.405771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.405941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.405950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.406277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.406285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.406615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.406623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.406794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.406805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.407134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.407142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.407444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.407452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.407766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.407774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.408093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.408102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.408385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.408393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.408703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.408712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.408886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.408894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.409068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.409077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.409359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.409367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.409668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.409676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.409968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.409977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.410241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.410249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.410555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.410563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.410876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.410885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.411216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.411225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.411525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.411533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.411822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.411831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.412148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.059 [2024-11-04 12:33:49.412156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.059 qpair failed and we were unable to recover it.
00:29:15.059 [2024-11-04 12:33:49.412325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.412334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.412670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.412677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.412983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.412991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.413334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.413342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.413637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.413645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.413955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.413963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.414268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.414276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.414614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.414622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.414957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.414965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.415278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.415287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.415620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.415628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.415958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.415966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.416289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.416297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.416638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.416646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.416977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.416985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.417164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.417173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.417380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.417388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.417688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.417695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.418008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.418016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.418357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.418365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.418672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.418680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.418995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.419006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.419338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.419347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.419653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.419662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.419993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.420003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.420311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.420326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.420708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.420720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.421026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.421034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.421312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.421321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.421637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.421645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.421940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.421948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.422236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.422245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.422558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.422565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.422899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.422908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.423220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.423229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.423414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.423421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.423722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.423731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.423929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.423938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.424126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.424136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.424426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.424434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.424754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.424762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.424929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.424938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.425262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.425271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.425425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.425441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.425693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.425701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.425896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.425905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.426190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.426198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.426364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.426373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.426704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.426712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.426865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.426873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.427059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.427066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.427363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.060 [2024-11-04 12:33:49.427371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.060 qpair failed and we were unable to recover it.
00:29:15.060 [2024-11-04 12:33:49.427501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.427508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.427809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.427818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.428129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.428137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.428418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.428426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.428614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.428622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.428794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.428803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.428975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.428983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.429025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.429034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.429320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.429328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.429533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.429544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.429863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.429871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.430181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.430190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.430469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.430477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.430806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.430814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.431117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.431125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.431331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.431339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.431646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.431655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.431877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.431886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.432213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.432222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.432566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.432575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.432761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.432769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.432944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.432953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.433317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.433325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.433626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.433634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.433846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.433855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.434196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.434204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.434506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.434514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.434890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.434898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.435224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.435232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.435481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.435489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.435754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.435766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.435811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.435818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.436012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.436019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.436225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.436234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.436539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.436548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.436711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.436720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.437015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.437024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.437336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.437344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.437526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.437535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.437757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.437765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.437931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.437940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.438150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.438157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.438460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.438468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.438722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.438730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.439041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.439050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.439441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.439450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.439753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.439767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.440089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.440097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.440400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.440407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.440669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.440679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.440833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.440842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.441042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.441049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.061 qpair failed and we were unable to recover it.
00:29:15.061 [2024-11-04 12:33:49.441214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.061 [2024-11-04 12:33:49.441230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.441402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.441410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.441558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.441565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.441606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.441614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.441799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.441807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.442095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.442103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.442367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.442375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.442681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.442690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.443010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.443019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.443328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.443336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.443601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.443609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.443944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.443953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.444104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.444112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.444416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.444424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.444756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.444765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.444941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.444949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.445232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.445239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.445530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.445539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.445744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.445755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.446075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.446083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.446270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.446278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.446436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.446444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.446706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.446714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.447043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.447052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.447361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.447369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.447674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.447682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.447858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.447868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.448157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.448166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.448320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.448327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.448627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.448634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.448911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.448919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.449238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.449247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.449555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.449563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.449709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.449717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.449988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.449997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.450192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.450201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.450376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.450384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.450714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.450724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.451052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.451060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.451232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.451241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.451541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.451549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.451834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.451843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.452156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.452164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.452473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.452480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.452669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.452678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.452956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.452964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.453233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.453241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.453569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.453577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.453877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.453886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.454054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.454062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.454395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.454402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.454707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.062 [2024-11-04 12:33:49.454716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.062 qpair failed and we were unable to recover it.
00:29:15.062 [2024-11-04 12:33:49.455036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.455045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.455376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.455383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.455682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.455689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.456001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.456010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.456336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.456344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.456645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.456653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.456967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.456975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.457304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.457313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.457640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.457649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.457964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.457973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.458309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.458317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.458644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.458652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.458959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.458968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.459169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.459178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.459479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.459488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.459797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.459806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.459996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.460004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.460303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.460311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.460622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.460630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.460961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.460969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.461285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.461293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.461670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.461678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.461970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.461979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.462249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.462256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.462564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.462572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.462932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.462943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.463204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.463212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.463512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.463520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.463806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.463815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.464119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.464128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.464439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.464447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.464779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.464787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.465095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.465104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.465431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.465440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.465724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.465732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.466048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.466056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.466299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.466307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.466589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.466597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.466950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.466958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.467267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.467275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.467558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.467566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.467919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.467928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.468231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.468239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.468437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.468446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.468754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.468763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.469035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.469043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.469334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.063 [2024-11-04 12:33:49.469342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.063 qpair failed and we were unable to recover it.
00:29:15.063 [2024-11-04 12:33:49.469530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.469539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.469856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.469864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.470120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.470128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.470430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.470438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.470752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.470761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.471074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.471082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.471384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.471393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.471680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.471688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.471980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.471988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.472300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.472308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.472607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.472616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.472806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.472815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.473087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.473096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.473396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.473404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.473690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.473698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.474015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.474024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.474334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.474343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.474682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.474690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.474997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.475007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.475283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.475292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.475558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.475567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.475866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.475874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.476181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.476189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.476379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.476388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.476562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.476570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.476905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.476913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.477238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.477247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.477547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.477555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.477875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.477883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.478193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.478201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.478502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.478510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.478880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.478888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.479194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.479202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.479504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.479513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.479865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.479875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.480058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.480067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.480378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.480386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.480693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.480702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.481021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.481029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.481343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.481350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.481621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.481628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.481906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.481915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.482205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.482213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.482384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.482393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.482566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.482577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.482890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.482899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.483225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.483233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.483518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.483527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.483836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.483845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.484148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.484156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.484443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.484451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.484772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.064 [2024-11-04 12:33:49.484781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.064 qpair failed and we were unable to recover it.
00:29:15.064 [2024-11-04 12:33:49.485091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.065 [2024-11-04 12:33:49.485098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.065 qpair failed and we were unable to recover it.
00:29:15.065 [2024-11-04 12:33:49.485428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.065 [2024-11-04 12:33:49.485436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.065 qpair failed and we were unable to recover it.
00:29:15.065 [2024-11-04 12:33:49.485605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.065 [2024-11-04 12:33:49.485614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.065 qpair failed and we were unable to recover it.
00:29:15.065 [2024-11-04 12:33:49.485905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.065 [2024-11-04 12:33:49.485913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.065 qpair failed and we were unable to recover it.
00:29:15.065 [2024-11-04 12:33:49.486225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.065 [2024-11-04 12:33:49.486233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.065 qpair failed and we were unable to recover it.
00:29:15.065 [2024-11-04 12:33:49.486564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.065 [2024-11-04 12:33:49.486571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.065 qpair failed and we were unable to recover it.
00:29:15.065 [2024-11-04 12:33:49.486872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.065 [2024-11-04 12:33:49.486882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.065 qpair failed and we were unable to recover it.
00:29:15.065 [2024-11-04 12:33:49.487185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.065 [2024-11-04 12:33:49.487193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.065 qpair failed and we were unable to recover it.
00:29:15.065 [2024-11-04 12:33:49.487493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.065 [2024-11-04 12:33:49.487501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.065 qpair failed and we were unable to recover it.
00:29:15.065 [2024-11-04 12:33:49.487811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.065 [2024-11-04 12:33:49.487820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.065 qpair failed and we were unable to recover it.
00:29:15.065 [2024-11-04 12:33:49.488119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.065 [2024-11-04 12:33:49.488128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.065 qpair failed and we were unable to recover it.
00:29:15.065 [2024-11-04 12:33:49.488297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.488306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.488620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.488629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.488836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.488845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.489178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.489187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.489493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.489501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.489658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.489667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.489834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.489842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.489882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.489889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.490176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.490185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.490519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.490528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 
00:29:15.065 [2024-11-04 12:33:49.490706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.490715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.491020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.491028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.491291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.491299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.491455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.491463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.491719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.491728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.491947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.491956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.492265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.492275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.492603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.492613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.492790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.492799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.492948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.492955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 
00:29:15.065 [2024-11-04 12:33:49.493266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.493274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.493605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.493613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.493785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.493794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.494103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.494111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.494395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.494403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.494561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.494569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.494860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.494868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.495057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.495066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.495249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.495256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.495559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.495566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 
00:29:15.065 [2024-11-04 12:33:49.495850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.495859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.496131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.496139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.496352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.496360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.496667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.496675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.496943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.496952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.497115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.497124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.497275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.497283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.497460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.497468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.497773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.497781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 00:29:15.065 [2024-11-04 12:33:49.498107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.065 [2024-11-04 12:33:49.498114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.065 qpair failed and we were unable to recover it. 
00:29:15.065 [2024-11-04 12:33:49.498423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.498431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.498739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.498750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.499036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.499044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.499356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.499365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.499620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.499627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.499957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.499966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.500281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.500289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.500592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.500601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.500798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.500807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.501138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.501146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 
00:29:15.066 [2024-11-04 12:33:49.501336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.501344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.501683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.501691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.502005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.502014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.502184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.502193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.502408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.502416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.502719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.502727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.503022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.503031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.503312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.503320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.503633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.503641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.503956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.503965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 
00:29:15.066 [2024-11-04 12:33:49.504278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.504286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.504464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.504473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.504580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.504590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.504817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.504825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.505166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.505175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.505456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.505464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.505681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.505690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.505969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.505978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.506131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.506138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.506425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.506432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 
00:29:15.066 [2024-11-04 12:33:49.506672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.506682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.506872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.506881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.507051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.507060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.507103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.507111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.507407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.507415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.507597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.507607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.507785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.507793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.508101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.508109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.508514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.508522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.508850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.508858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 
00:29:15.066 [2024-11-04 12:33:49.509056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.509064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.509223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.509230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.509557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.509565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.509744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.066 [2024-11-04 12:33:49.509756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.066 qpair failed and we were unable to recover it. 00:29:15.066 [2024-11-04 12:33:49.510039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.510047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.510377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.510385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.510555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.510564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.510833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.510841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.511158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.511167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.511350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.511358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 
00:29:15.067 [2024-11-04 12:33:49.511573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.511582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.511842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.511851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.512043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.512051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.512354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.512362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.512692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.512701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.512989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.512997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.513180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.513189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.513470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.513478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.513675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.513683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.513848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.513854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 
00:29:15.067 [2024-11-04 12:33:49.514009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.514017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.514340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.514347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.514545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.514556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.514711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.514720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.515084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.515092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.515414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.515423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.515621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.515630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.515901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.515910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.516221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.516229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.516515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.516523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 
00:29:15.067 [2024-11-04 12:33:49.516690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.516698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.516877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.516885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.517045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.517055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.517341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.517348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.517532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.517540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.517735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.517743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.517927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.517937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.518243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.518251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.518495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.518503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.518705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.518713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 
00:29:15.067 [2024-11-04 12:33:49.519031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.519039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.519081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.519088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.519269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.519277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.519443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.519453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.519721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.519729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.519918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.519925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.520241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.520249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.520570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.520579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.520861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.520870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.521181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.521189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 
00:29:15.067 [2024-11-04 12:33:49.521524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.521533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.521818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.521827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.522130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.522138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.522451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.522460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.522800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.522808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.523125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.067 [2024-11-04 12:33:49.523133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.067 qpair failed and we were unable to recover it. 00:29:15.067 [2024-11-04 12:33:49.523436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.068 [2024-11-04 12:33:49.523445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.068 qpair failed and we were unable to recover it. 00:29:15.068 [2024-11-04 12:33:49.523766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.068 [2024-11-04 12:33:49.523776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.068 qpair failed and we were unable to recover it. 00:29:15.068 [2024-11-04 12:33:49.524079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.068 [2024-11-04 12:33:49.524088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.068 qpair failed and we were unable to recover it. 00:29:15.068 [2024-11-04 12:33:49.524257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.068 [2024-11-04 12:33:49.524266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.068 qpair failed and we were unable to recover it. 
00:29:15.068 [2024-11-04 12:33:49.524599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.068 [2024-11-04 12:33:49.524608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.068 qpair failed and we were unable to recover it. 00:29:15.068 [2024-11-04 12:33:49.524912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.068 [2024-11-04 12:33:49.524921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.068 qpair failed and we were unable to recover it. 00:29:15.068 [2024-11-04 12:33:49.525165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.068 [2024-11-04 12:33:49.525176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.068 qpair failed and we were unable to recover it. 00:29:15.068 [2024-11-04 12:33:49.525505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.068 [2024-11-04 12:33:49.525515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.068 qpair failed and we were unable to recover it. 00:29:15.068 [2024-11-04 12:33:49.525815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.068 [2024-11-04 12:33:49.525824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.068 qpair failed and we were unable to recover it. 00:29:15.068 [2024-11-04 12:33:49.526156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.068 [2024-11-04 12:33:49.526164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.068 qpair failed and we were unable to recover it. 00:29:15.068 [2024-11-04 12:33:49.526492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.068 [2024-11-04 12:33:49.526500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.068 qpair failed and we were unable to recover it. 00:29:15.068 [2024-11-04 12:33:49.526807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.068 [2024-11-04 12:33:49.526816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.068 qpair failed and we were unable to recover it. 00:29:15.068 [2024-11-04 12:33:49.527130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.068 [2024-11-04 12:33:49.527138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.068 qpair failed and we were unable to recover it. 00:29:15.068 [2024-11-04 12:33:49.527424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.068 [2024-11-04 12:33:49.527432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.068 qpair failed and we were unable to recover it. 
00:29:15.068 [2024-11-04 12:33:49.527756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.068 [2024-11-04 12:33:49.527765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.068 qpair failed and we were unable to recover it.
00:29:15.072 [... the same three-line failure repeats for every reconnect attempt from 12:33:49.527756 through 12:33:49.590715 (~210 attempts); only the timestamps differ. errno 111 is ECONNREFUSED: nothing is accepting TCP connections at 10.0.0.2:4420, so each connect() for tqpair=0x7f7e38000b90 fails immediately and the qpair cannot recover.]
00:29:15.072 [2024-11-04 12:33:49.591013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.591022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.591334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.591343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.591664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.591672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.591851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.591859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.592212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.592220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.592513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.592521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.592837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.592846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.593025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.593034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.593207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.593214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.593519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.593527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 
00:29:15.072 [2024-11-04 12:33:49.593856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.593864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.594156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.594164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.594462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.594470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.594619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.594626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.594926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.594934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.595248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.595256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.595431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.595440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.595755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.595764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.596049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.596057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.596372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.596379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 
00:29:15.072 [2024-11-04 12:33:49.596663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.596671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.596987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.596995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.597300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.597311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.597604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.597612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.597804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.597812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.598124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.598132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.598501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.598509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.598662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.598670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.598981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.598989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 00:29:15.072 [2024-11-04 12:33:49.599273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.072 [2024-11-04 12:33:49.599281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.072 qpair failed and we were unable to recover it. 
00:29:15.072 [2024-11-04 12:33:49.599548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.599557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.599871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.599879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.600054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.600063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.600355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.600363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.600671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.600679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.600934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.600942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.601125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.601134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.601311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.601319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.601617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.601625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.601909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.601917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 
00:29:15.073 [2024-11-04 12:33:49.602218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.602226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.602388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.602396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.602675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.602683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.602849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.602859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.603009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.603017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.603205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.603214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.603488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.603497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.603663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.603672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.603990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.603998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.604212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.604220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 
00:29:15.073 [2024-11-04 12:33:49.604513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.604521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.604820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.604828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.605001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.605010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.605208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.605216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.605535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.605543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.605814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.605823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.606153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.606161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.606341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.606350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.606540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.606549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.606593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.606601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 
00:29:15.073 [2024-11-04 12:33:49.606913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.606922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.607232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.607240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.607572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.607581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.607760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.607768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.607937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.607945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.608198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.608208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.608386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.608395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.608698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.608706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.609004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.609012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.609188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.609197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 
00:29:15.073 [2024-11-04 12:33:49.609507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.609515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.609765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.609774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.610076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.610084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.610410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.610418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.610569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.610586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.610885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.610893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.073 qpair failed and we were unable to recover it. 00:29:15.073 [2024-11-04 12:33:49.611177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.073 [2024-11-04 12:33:49.611185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.074 qpair failed and we were unable to recover it. 00:29:15.074 [2024-11-04 12:33:49.611507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.074 [2024-11-04 12:33:49.611515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.074 qpair failed and we were unable to recover it. 00:29:15.074 [2024-11-04 12:33:49.611715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.074 [2024-11-04 12:33:49.611724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.074 qpair failed and we were unable to recover it. 00:29:15.074 [2024-11-04 12:33:49.612055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.074 [2024-11-04 12:33:49.612063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.074 qpair failed and we were unable to recover it. 
00:29:15.074 [2024-11-04 12:33:49.612253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.074 [2024-11-04 12:33:49.612262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.074 qpair failed and we were unable to recover it. 00:29:15.348 [2024-11-04 12:33:49.612550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.348 [2024-11-04 12:33:49.612561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.348 qpair failed and we were unable to recover it. 00:29:15.348 [2024-11-04 12:33:49.612874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.348 [2024-11-04 12:33:49.612883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.348 qpair failed and we were unable to recover it. 00:29:15.348 [2024-11-04 12:33:49.613043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.348 [2024-11-04 12:33:49.613052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.348 qpair failed and we were unable to recover it. 00:29:15.348 [2024-11-04 12:33:49.613229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.348 [2024-11-04 12:33:49.613237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.348 qpair failed and we were unable to recover it. 00:29:15.348 [2024-11-04 12:33:49.613512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.348 [2024-11-04 12:33:49.613520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.348 qpair failed and we were unable to recover it. 00:29:15.348 [2024-11-04 12:33:49.613677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.348 [2024-11-04 12:33:49.613685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.348 qpair failed and we were unable to recover it. 00:29:15.348 [2024-11-04 12:33:49.613851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.348 [2024-11-04 12:33:49.613859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.348 qpair failed and we were unable to recover it. 00:29:15.348 [2024-11-04 12:33:49.614052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.348 [2024-11-04 12:33:49.614060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.348 qpair failed and we were unable to recover it. 00:29:15.348 [2024-11-04 12:33:49.614247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.348 [2024-11-04 12:33:49.614255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.348 qpair failed and we were unable to recover it. 
00:29:15.348 [2024-11-04 12:33:49.614555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.348 [2024-11-04 12:33:49.614563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.348 qpair failed and we were unable to recover it. 00:29:15.348 [2024-11-04 12:33:49.614877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.348 [2024-11-04 12:33:49.614887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.348 qpair failed and we were unable to recover it. 00:29:15.348 [2024-11-04 12:33:49.615212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.348 [2024-11-04 12:33:49.615220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.348 qpair failed and we were unable to recover it. 00:29:15.348 [2024-11-04 12:33:49.615402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.348 [2024-11-04 12:33:49.615410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.348 qpair failed and we were unable to recover it. 00:29:15.348 [2024-11-04 12:33:49.615714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.348 [2024-11-04 12:33:49.615722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.348 qpair failed and we were unable to recover it. 00:29:15.348 [2024-11-04 12:33:49.615924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.348 [2024-11-04 12:33:49.615933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.348 qpair failed and we were unable to recover it. 00:29:15.348 [2024-11-04 12:33:49.616116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.348 [2024-11-04 12:33:49.616124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.348 qpair failed and we were unable to recover it. 00:29:15.348 [2024-11-04 12:33:49.616431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.348 [2024-11-04 12:33:49.616439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.616619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.616628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.616943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.616951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 
00:29:15.349 [2024-11-04 12:33:49.617253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.617261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.617568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.617576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.617851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.617861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.618161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.618169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.618479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.618487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.618659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.618668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.618982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.618990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.619293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.619301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.619585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.619594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.619879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.619888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 
00:29:15.349 [2024-11-04 12:33:49.620262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.620270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.620555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.620562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.620863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.620871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.621182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.621189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.621519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.621528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.621894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.621902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.622207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.622215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.622503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.622511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.622789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.622797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.623096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.623104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 
00:29:15.349 [2024-11-04 12:33:49.623402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.623410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.623725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.623734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.624032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.624041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.624320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.624329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.624595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.624604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.624905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.624914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.625248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.625257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.625567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.625576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.625835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.625843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 00:29:15.349 [2024-11-04 12:33:49.626132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.349 [2024-11-04 12:33:49.626140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.349 qpair failed and we were unable to recover it. 
00:29:15.349 [2024-11-04 12:33:49.626448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.349 [2024-11-04 12:33:49.626456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.349 qpair failed and we were unable to recover it.
00:29:15.349 [2024-11-04 12:33:49.626756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.349 [2024-11-04 12:33:49.626765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.349 qpair failed and we were unable to recover it.
00:29:15.349 [2024-11-04 12:33:49.627070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.349 [2024-11-04 12:33:49.627078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.349 qpair failed and we were unable to recover it.
[... the same three-record pattern (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry from 2024-11-04 12:33:49.627444 through 12:33:49.684711, under Jenkins timestamps 00:29:15.349-00:29:15.355 ...]
00:29:15.355 [2024-11-04 12:33:49.684889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-11-04 12:33:49.684898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-11-04 12:33:49.685094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-11-04 12:33:49.685102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-11-04 12:33:49.685295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-11-04 12:33:49.685303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-11-04 12:33:49.685617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-11-04 12:33:49.685626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-11-04 12:33:49.685809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-11-04 12:33:49.685818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-11-04 12:33:49.686134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-11-04 12:33:49.686142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-11-04 12:33:49.686321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-11-04 12:33:49.686331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-11-04 12:33:49.686620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-11-04 12:33:49.686628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-11-04 12:33:49.686703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-11-04 12:33:49.686709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-11-04 12:33:49.687051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-11-04 12:33:49.687059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 
00:29:15.355 [2024-11-04 12:33:49.687239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-11-04 12:33:49.687248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-11-04 12:33:49.687575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-11-04 12:33:49.687582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-11-04 12:33:49.687762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-11-04 12:33:49.687771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-11-04 12:33:49.688101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.355 [2024-11-04 12:33:49.688109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.355 qpair failed and we were unable to recover it. 00:29:15.355 [2024-11-04 12:33:49.688467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.688477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.688762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.688771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.689158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.689166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.689469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.689477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.689765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.689773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.690060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.690067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 
00:29:15.356 [2024-11-04 12:33:49.690233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.690242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.690524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.690532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.690850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.690858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.691226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.691234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.691413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.691422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.691465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.691472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.691788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.691796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.691965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.691974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.692146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.692153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.692331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.692342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 
00:29:15.356 [2024-11-04 12:33:49.692502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.692511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.692848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.692856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.693114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.693122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.693297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.693306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.693491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.693499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.693792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.693801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.693974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.693983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.694323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.694331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.694635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.694643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.694865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.694873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 
00:29:15.356 [2024-11-04 12:33:49.695119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.695127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.695309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.695317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.695603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.695611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.695784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.695792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.696105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.696113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.696413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.696423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.696706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.696715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.697028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.697036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.697341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.697349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.356 [2024-11-04 12:33:49.697636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.697644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 
00:29:15.356 [2024-11-04 12:33:49.697682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.356 [2024-11-04 12:33:49.697690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.356 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.697990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.698000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.698263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.698272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.698312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.698321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.698489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.698497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.698821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.698829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.699009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.699018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.699346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.699353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.699681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.699689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.699994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.700002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 
00:29:15.357 [2024-11-04 12:33:49.700293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.700301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.700564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.700572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.700913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.700922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.701270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.701277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.701596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.701604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.701777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.701786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.701972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.701980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.702295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.702303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.702483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.702492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.702819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.702830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 
00:29:15.357 [2024-11-04 12:33:49.703101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.703108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.703413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.703421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.703713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.703721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.704027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.704035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.704344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.704352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.704523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.704532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.704841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.704850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.705026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.705035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.705196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.705205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.705508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.705516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 
00:29:15.357 [2024-11-04 12:33:49.705799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.705807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.706144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.706152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.706452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.706460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.706624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.357 [2024-11-04 12:33:49.706632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.357 qpair failed and we were unable to recover it. 00:29:15.357 [2024-11-04 12:33:49.706899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.706907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.707196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.707204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.707504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.707512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.707798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.707807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.707989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.707996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.708163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.708172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 
00:29:15.358 [2024-11-04 12:33:49.708455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.708463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.708616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.708625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.708905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.708913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.709231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.709239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.709400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.709408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.709712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.709720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.710050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.710058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.710215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.710224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.710525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.710533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.710823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.710831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 
00:29:15.358 [2024-11-04 12:33:49.711141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.711149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.711450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.711458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.711771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.711779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.712086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.712094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.712396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.712405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.712710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.712719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.712990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.713000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.713318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.713327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.713540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.713548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.713848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.713857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 
00:29:15.358 [2024-11-04 12:33:49.714180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.714189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.714469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.714477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.714794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.714802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.715107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.715115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.715427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.715436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.715722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.715731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.716028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.716037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.716316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.716325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.716637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.716644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.358 [2024-11-04 12:33:49.716975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.716984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 
00:29:15.358 [2024-11-04 12:33:49.717199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.358 [2024-11-04 12:33:49.717207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.358 qpair failed and we were unable to recover it. 00:29:15.359 [2024-11-04 12:33:49.717508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-11-04 12:33:49.717515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-11-04 12:33:49.717827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-11-04 12:33:49.717835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-11-04 12:33:49.718154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-11-04 12:33:49.718162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-11-04 12:33:49.718491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-11-04 12:33:49.718500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-11-04 12:33:49.718808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-11-04 12:33:49.718816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-11-04 12:33:49.719112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-11-04 12:33:49.719122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-11-04 12:33:49.719417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-11-04 12:33:49.719426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-11-04 12:33:49.719735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-11-04 12:33:49.719743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 00:29:15.359 [2024-11-04 12:33:49.719916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.359 [2024-11-04 12:33:49.719924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.359 qpair failed and we were unable to recover it. 
00:29:15.359 [2024-11-04 12:33:49.720241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.359 [2024-11-04 12:33:49.720249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.359 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats for tqpair=0x7f7e38000b90 roughly a hundred more times, from 12:33:49.720592 through 12:33:49.751497 ...]
00:29:15.362 [2024-11-04 12:33:49.751894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.362 [2024-11-04 12:33:49.751990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e40000b90 with addr=10.0.0.2, port=4420
00:29:15.362 qpair failed and we were unable to recover it.
[... two further failed attempts on tqpair=0x7f7e40000b90 at 12:33:49.752400 and 12:33:49.752637, after which the same sequence resumes for tqpair=0x7f7e38000b90 and repeats roughly a hundred more times, from 12:33:49.752861 through 12:33:49.778888 ...]
00:29:15.365 [2024-11-04 12:33:49.779150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.365 [2024-11-04 12:33:49.779158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.365 qpair failed and we were unable to recover it.
00:29:15.365 [2024-11-04 12:33:49.779485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.779494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.779784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.779793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.780101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.780109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.780411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.780420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.780723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.780732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.781080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.781090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.781261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.781270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.781572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.781580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.781921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.781930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.782087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.782098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 
00:29:15.365 [2024-11-04 12:33:49.782404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.782412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.782732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.782740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.783062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.783071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.783432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.783440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.783776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.783785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.784124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.784132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.784308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.784317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.784635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.784643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.784968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.784976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.785269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.785277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 
00:29:15.365 [2024-11-04 12:33:49.785580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.785588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.785888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.785897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.786231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.786238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.786539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.786547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.786813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.786821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.787125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.787133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.787416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.787425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.787756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.787765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.788052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.788060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.788367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.788374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 
00:29:15.365 [2024-11-04 12:33:49.788563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.788571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.788833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.788841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.789130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.789138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.789435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.789443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.365 [2024-11-04 12:33:49.789607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-04 12:33:49.789615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.365 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.789914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.789922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.790222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.790232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.790564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.790572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.790875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.790884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.791190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.791198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 
00:29:15.366 [2024-11-04 12:33:49.791537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.791545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.791872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.791880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.792184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.792193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.792476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.792484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.792784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.792793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.793105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.793114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.793394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.793402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.793716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.793725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.794046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.794054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.794339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.794346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 
00:29:15.366 [2024-11-04 12:33:49.794656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.794664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.794853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.794862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.795148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.795156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.795422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.795430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.795745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.795757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.796086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.796095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.796365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.796374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.796673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.796681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.796979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.796988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.797323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.797331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 
00:29:15.366 [2024-11-04 12:33:49.797595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.797603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.797878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.797887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.798200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.798209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.798382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.798391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.798676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.798685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.798993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.799001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.799188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.799196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.799379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.799388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.799594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.799603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 00:29:15.366 [2024-11-04 12:33:49.799887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-04 12:33:49.799895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.366 qpair failed and we were unable to recover it. 
00:29:15.366 [2024-11-04 12:33:49.800076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.800084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.800395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.800403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.800707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.800715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.801055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.801064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.801329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.801337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.801637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.801646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.801952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.801962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.802263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.802271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.802589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.802597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.802915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.802925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 
00:29:15.367 [2024-11-04 12:33:49.803234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.803242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.803508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.803516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.803859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.803867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.804218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.804226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.804494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.804502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.804827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.804836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.805147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.805155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.805422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.805431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.805761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.805770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.806094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.806102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 
00:29:15.367 [2024-11-04 12:33:49.806409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.806417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.806700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.806708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.807021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.807031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.807338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.807347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.807676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.807684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.807974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.807982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.808170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.808178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.808475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.808483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.808670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.808679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.808984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.808994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 
00:29:15.367 [2024-11-04 12:33:49.809275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.809284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.367 [2024-11-04 12:33:49.809593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-04 12:33:49.809602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.367 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.809849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.809857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.810145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.810154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.810467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.810476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.810779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.810787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.811097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.811105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.811396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.811405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.811708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.811716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.812019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.812028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 
00:29:15.368 [2024-11-04 12:33:49.812325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.812333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.812637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.812646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.812936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.812944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.813274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.813282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.813519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.813527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.813835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.813843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.814159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.814168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.814467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.814475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.814653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.814662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.814992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.815000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 
00:29:15.368 [2024-11-04 12:33:49.815323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.815331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.815661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.815669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.815990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.815998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.816327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.816336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.816632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.816640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.816907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.816916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.817095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.817103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.817417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.817425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.817691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.817699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 00:29:15.368 [2024-11-04 12:33:49.817851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.368 [2024-11-04 12:33:49.817860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.368 qpair failed and we were unable to recover it. 
00:29:15.368 [2024-11-04 12:33:49.818041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.368 [2024-11-04 12:33:49.818048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.368 qpair failed and we were unable to recover it.
00:29:15.368 [2024-11-04 12:33:49.818382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.368 [2024-11-04 12:33:49.818390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.368 qpair failed and we were unable to recover it.
[... the identical three-line failure (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously for every reconnect attempt from [2024-11-04 12:33:49.818041] through [2024-11-04 12:33:49.877693], elapsed timestamps 00:29:15.368 through 00:29:15.374 ...]
00:29:15.374 [2024-11-04 12:33:49.878002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-11-04 12:33:49.878011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-11-04 12:33:49.878321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-11-04 12:33:49.878330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-11-04 12:33:49.878613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-11-04 12:33:49.878622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-11-04 12:33:49.878832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-11-04 12:33:49.878841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-11-04 12:33:49.879049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-11-04 12:33:49.879058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-11-04 12:33:49.879360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-11-04 12:33:49.879368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-11-04 12:33:49.879703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-11-04 12:33:49.879711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-11-04 12:33:49.879991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-11-04 12:33:49.880000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-11-04 12:33:49.880282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-11-04 12:33:49.880290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-11-04 12:33:49.880602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-11-04 12:33:49.880610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 
00:29:15.374 [2024-11-04 12:33:49.880916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-11-04 12:33:49.880925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-11-04 12:33:49.881093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-11-04 12:33:49.881103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-11-04 12:33:49.881404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-11-04 12:33:49.881412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-11-04 12:33:49.881684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-11-04 12:33:49.881693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.374 [2024-11-04 12:33:49.881985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.374 [2024-11-04 12:33:49.881994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.374 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.882261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.882271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.882538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.882550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.882862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.882870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.883180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.883189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.883497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.883505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 
00:29:15.375 [2024-11-04 12:33:49.883788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.883798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.884101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.884110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.884411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.884419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.884698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.884707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.885007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.885017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.885319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.885327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.885655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.885663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.885990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.885999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.886305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.886313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.886641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.886650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 
00:29:15.375 [2024-11-04 12:33:49.886957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.886966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.887275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.887284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.887613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.887622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.887913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.887922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.888232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.888241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.888574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.888582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.888920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.888929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.889236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.889244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.889534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.889543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.889859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.889868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 
00:29:15.375 [2024-11-04 12:33:49.890175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.890184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.890474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.890482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.890799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.890809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.891121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.891130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.891467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.891476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.891734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.891742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.892051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.892059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.892342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.892350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.892663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.892672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.375 [2024-11-04 12:33:49.892978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.892987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 
00:29:15.375 [2024-11-04 12:33:49.893275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.375 [2024-11-04 12:33:49.893283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.375 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.893595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.893604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.893913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.893922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.894221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.894231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.894531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.894539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.894806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.894815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.895155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.895166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.895468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.895476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.895775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.895783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.896075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.896083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 
00:29:15.376 [2024-11-04 12:33:49.896383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.896392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.896658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.896667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.896972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.896982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.897249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.897258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.897560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.897568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.897758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.897767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.898056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.898064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.898327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.898336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.898665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.898674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.898989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.898998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 
00:29:15.376 [2024-11-04 12:33:49.899305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.899314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.899651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.899660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.899989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.899998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.900312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.900321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.900654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.900664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.900971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.900980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.901279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.901288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.901570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.901579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.376 [2024-11-04 12:33:49.901909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.376 [2024-11-04 12:33:49.901917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.376 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.902226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.902236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 
00:29:15.647 [2024-11-04 12:33:49.902564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.902573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.902862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.902871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.903196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.903204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.903548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.903557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.903887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.903895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.904210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.904219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.904523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.904531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.904832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.904841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.905152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.905161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.905415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.905424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 
00:29:15.647 [2024-11-04 12:33:49.905601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.905611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.905805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.905815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.906132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.906141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.906312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.906321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.906640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.906648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.906949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.906958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.907264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.907274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.907535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.907544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.907878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.907887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 00:29:15.647 [2024-11-04 12:33:49.908218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.647 [2024-11-04 12:33:49.908227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.647 qpair failed and we were unable to recover it. 
00:29:15.647 [... five more failures for tqpair=0x7f7e38000b90 (12:33:49.908436 through 12:33:49.909159) ...]
00:29:15.647 [2024-11-04 12:33:49.909670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.647 [2024-11-04 12:33:49.909774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e40000b90 with addr=10.0.0.2, port=4420
00:29:15.647 qpair failed and we were unable to recover it.
00:29:15.647 [... two further failures for tqpair=0x7f7e40000b90 (12:33:49.910019, 12:33:49.910398), then the retries resume on tqpair=0x7f7e38000b90 (12:33:49.910756, 12:33:49.911078) ...]
00:29:15.648 [... the same connect() failed, errno = 111 / qpair failure sequence continues for tqpair=0x7f7e38000b90, timestamps 12:33:49.911395 through 12:33:49.927154 ...]
00:29:15.649 [2024-11-04 12:33:49.927470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.927479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.927763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.927772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.928115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.928123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.928428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.928436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.928751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.928759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.928918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.928927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.929207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.929217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.929481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.929489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.929646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.929654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.929964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.929973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 
00:29:15.649 [2024-11-04 12:33:49.930255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.930263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.930631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.930639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.930847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.930855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.931118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.931126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.931284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.931292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.931624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.931631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.931966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.931975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.932132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.932141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.932457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.932466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 00:29:15.649 [2024-11-04 12:33:49.932794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.649 [2024-11-04 12:33:49.932802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.649 qpair failed and we were unable to recover it. 
00:29:15.649 [2024-11-04 12:33:49.932982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.932990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.933152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.933160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.933463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.933470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.933637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.933645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.933807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.933815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.934205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.934213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.934545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.934553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.934854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.934862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.935129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.935137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.935470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.935478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 
00:29:15.650 [2024-11-04 12:33:49.935661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.935669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.936042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.936050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.936396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.936404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.936710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.936722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.937000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.937008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.937192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.937201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.937358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.937366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.937706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.937714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.938053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.938061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.938378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.938386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 
00:29:15.650 [2024-11-04 12:33:49.938692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.938699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.939052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.939060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.939243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.939251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.939553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.939561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.939914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.939922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.940224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.940232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.940501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.940509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.940839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.940847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.941023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.941031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.941359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.941367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 
00:29:15.650 [2024-11-04 12:33:49.941648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.941656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.941812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.941820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.942019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.942026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.942235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.942242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.942560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.942567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.650 qpair failed and we were unable to recover it. 00:29:15.650 [2024-11-04 12:33:49.942842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.650 [2024-11-04 12:33:49.942850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.943181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.943189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.943484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.943492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.943801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.943809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.944114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.944122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 
00:29:15.651 [2024-11-04 12:33:49.944452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.944460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.944779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.944788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.944978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.944985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.945304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.945312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.945604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.945612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.945909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.945918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.946141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.946149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.946334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.946343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.946533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.946541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.946862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.946870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 
00:29:15.651 [2024-11-04 12:33:49.947202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.947210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.947492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.947500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.947800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.947809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.948021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.948030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.948365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.948373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.948434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.948441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.948600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.948608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.948930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.948938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.949272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.949280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.949565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.949573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 
00:29:15.651 [2024-11-04 12:33:49.949889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.949897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.950180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.950189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.950495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.950503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.950770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.950778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.951036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.951044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.951239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.951247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.951576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.951584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.951914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.651 [2024-11-04 12:33:49.951922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.651 qpair failed and we were unable to recover it. 00:29:15.651 [2024-11-04 12:33:49.952226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.952235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.952567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.952576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 
00:29:15.652 [2024-11-04 12:33:49.952907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.952915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.953216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.953224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.953547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.953555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.953845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.953853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.954163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.954171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.954477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.954485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.954776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.954784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.955167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.955175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.955484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.955492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.955807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.955815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 
00:29:15.652 [2024-11-04 12:33:49.956020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.956028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.956187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.956195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.956538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.956545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.956840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.956848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.957148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.957156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.957489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.957497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.957709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.957717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.957878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.957886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.958232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.958239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.958430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.958438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 
00:29:15.652 [2024-11-04 12:33:49.958768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.958776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.959119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.959128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.959343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.959351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.959681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.959691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.959975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.959983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.960283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.960292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.960620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.960629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.960956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.960964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.961273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.961281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 00:29:15.652 [2024-11-04 12:33:49.961612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.652 [2024-11-04 12:33:49.961619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.652 qpair failed and we were unable to recover it. 
00:29:15.652 [... connect()/qpair failure messages continue as above, 12:33:49.961 through 12:33:49.963 ...]
00:29:15.652 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:15.653 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:29:15.653 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:29:15.653 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:15.653 12:33:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:15.653 [... connect()/qpair failure messages interleaved with the xtrace lines above, 12:33:49.963 through 12:33:49.966 ...]
00:29:15.656 [2024-11-04 12:33:50.003527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.656 [2024-11-04 12:33:50.003534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.656 qpair failed and we were unable to recover it. 00:29:15.656 [2024-11-04 12:33:50.003856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.656 [2024-11-04 12:33:50.003864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.656 qpair failed and we were unable to recover it. 00:29:15.656 [2024-11-04 12:33:50.004084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.656 [2024-11-04 12:33:50.004091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.656 qpair failed and we were unable to recover it. 00:29:15.656 [2024-11-04 12:33:50.004505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.656 [2024-11-04 12:33:50.004516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.656 qpair failed and we were unable to recover it. 00:29:15.656 [2024-11-04 12:33:50.004830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.657 [2024-11-04 12:33:50.004837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.657 qpair failed and we were unable to recover it. 00:29:15.657 [2024-11-04 12:33:50.005030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.657 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.657 [2024-11-04 12:33:50.005039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.657 qpair failed and we were unable to recover it. 00:29:15.657 [2024-11-04 12:33:50.005323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.657 [2024-11-04 12:33:50.005330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.657 qpair failed and we were unable to recover it. 00:29:15.657 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:15.657 [2024-11-04 12:33:50.005581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.657 [2024-11-04 12:33:50.005589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.657 qpair failed and we were unable to recover it. 
00:29:15.657 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.657 [2024-11-04 12:33:50.005909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.657 [2024-11-04 12:33:50.005918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.657 qpair failed and we were unable to recover it. 00:29:15.657 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.657 [2024-11-04 12:33:50.006232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.657 [2024-11-04 12:33:50.006240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.657 qpair failed and we were unable to recover it. 00:29:15.657 [2024-11-04 12:33:50.006612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.657 [2024-11-04 12:33:50.006619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.657 qpair failed and we were unable to recover it. 00:29:15.657 [2024-11-04 12:33:50.006900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.657 [2024-11-04 12:33:50.006907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.657 qpair failed and we were unable to recover it. 00:29:15.657 [2024-11-04 12:33:50.007140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.657 [2024-11-04 12:33:50.007148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.657 qpair failed and we were unable to recover it. 00:29:15.657 [2024-11-04 12:33:50.007479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.657 [2024-11-04 12:33:50.007486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.657 qpair failed and we were unable to recover it. 00:29:15.657 [2024-11-04 12:33:50.007853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.657 [2024-11-04 12:33:50.007860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.657 qpair failed and we were unable to recover it. 00:29:15.657 [2024-11-04 12:33:50.008031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.657 [2024-11-04 12:33:50.008038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.657 qpair failed and we were unable to recover it. 00:29:15.657 [2024-11-04 12:33:50.008330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.657 [2024-11-04 12:33:50.008337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.657 qpair failed and we were unable to recover it. 
00:29:15.658 [2024-11-04 12:33:50.022841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.658 [2024-11-04 12:33:50.022848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.658 qpair failed and we were unable to recover it. 00:29:15.658 [2024-11-04 12:33:50.023084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.658 [2024-11-04 12:33:50.023091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.658 qpair failed and we were unable to recover it. 00:29:15.658 [2024-11-04 12:33:50.023412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.658 [2024-11-04 12:33:50.023420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.658 qpair failed and we were unable to recover it. 00:29:15.658 [2024-11-04 12:33:50.023609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.658 [2024-11-04 12:33:50.023617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.658 qpair failed and we were unable to recover it. 00:29:15.658 [2024-11-04 12:33:50.023795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.658 [2024-11-04 12:33:50.023803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.658 qpair failed and we were unable to recover it. 00:29:15.658 [2024-11-04 12:33:50.024135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.658 [2024-11-04 12:33:50.024142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.658 qpair failed and we were unable to recover it. 00:29:15.658 [2024-11-04 12:33:50.024427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.658 [2024-11-04 12:33:50.024434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.658 qpair failed and we were unable to recover it. 00:29:15.658 [2024-11-04 12:33:50.024721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.658 [2024-11-04 12:33:50.024728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.658 qpair failed and we were unable to recover it. 00:29:15.658 [2024-11-04 12:33:50.025039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.658 [2024-11-04 12:33:50.025046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.658 qpair failed and we were unable to recover it. 00:29:15.658 [2024-11-04 12:33:50.025207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.658 [2024-11-04 12:33:50.025214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.658 qpair failed and we were unable to recover it. 
00:29:15.658 [2024-11-04 12:33:50.025510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.658 [2024-11-04 12:33:50.025517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.658 qpair failed and we were unable to recover it. 00:29:15.658 [2024-11-04 12:33:50.025807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.658 [2024-11-04 12:33:50.025815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.658 qpair failed and we were unable to recover it. 00:29:15.658 [2024-11-04 12:33:50.026110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.026117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.026413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.026420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.026736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.026749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.027076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.027083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.027369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.027377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.027716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.027723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.028038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.028046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.028339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.028346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 
00:29:15.659 [2024-11-04 12:33:50.028649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.028656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.029037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.029044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.029372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.029380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.029597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.029611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.029793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.029801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.030163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.030171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.030492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.030499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.030855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.030863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.031041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.031049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.031253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.031260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 
00:29:15.659 [2024-11-04 12:33:50.031565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.031572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.031885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.031892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.032241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.032248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.032534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.032541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.032852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.032860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.033176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.033184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.033374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.033381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.033675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.033682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.033902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.033909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.034256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.034263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 
00:29:15.659 [2024-11-04 12:33:50.034488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.034522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.034820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.034828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.035186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.035194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.035390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.035397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.035590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.035597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.035683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.035690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.036000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.036008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.036222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.036229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.036435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.036442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.659 [2024-11-04 12:33:50.036611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.036618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 
00:29:15.659 [2024-11-04 12:33:50.036688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.659 [2024-11-04 12:33:50.036696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.659 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.036933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.036940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.037105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.037113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.037439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.037446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.037762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.037775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.038092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.038100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.038306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.038314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.038668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.038676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.038965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.038973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.039207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.039216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 
00:29:15.660 [2024-11-04 12:33:50.039524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.039532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.039577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.039585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.039780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.039788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.040070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.040078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.040423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.040431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.040703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.040710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.040907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.040914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.041047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.041054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.041339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.041346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 00:29:15.660 [2024-11-04 12:33:50.041532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.660 [2024-11-04 12:33:50.041540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.660 qpair failed and we were unable to recover it. 
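Editor's triage note: errno 111 on Linux is ECONNREFUSED, i.e. the initiator's connect() to 10.0.0.2:4420 is being actively refused because nothing is listening there at this point in the target-disconnect test, so nvme_tcp_qpair_connect_sock keeps failing and the qpair cannot recover. A minimal Python sketch (assuming no listener on the chosen loopback port) reproduces the same errno:

import errno, socket

# On Linux, errno 111 is ECONNREFUSED.
assert errno.ECONNREFUSED == 111

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.connect(("127.0.0.1", 4420))  # assumes no NVMe/TCP listener on loopback:4420
except OSError as e:
    print(e.errno, e.strerror)      # -> 111 Connection refused
finally:
    s.close()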
00:29:15.660 [2024-11-04 12:33:50.041879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.660 [2024-11-04 12:33:50.041886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.660 qpair failed and we were unable to recover it.
[... the same error triplet repeats through 12:33:50.042675 ...]
00:29:15.660 Malloc0
[... the same error triplet repeats through 12:33:50.043551 ...]
00:29:15.660 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
[... the same error triplet repeats through 12:33:50.044033 ...]
00:29:15.660 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
[... one more error triplet at 12:33:50.044114/.044135 ...]
00:29:15.660 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
[... one more error triplet at 12:33:50.044476/.044485 ...]
00:29:15.660 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the same error triplet repeats through 12:33:50.046116 ...]
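The rpc_cmd nvmf_create_transport -t tcp -o call in the trace above goes through SPDK's JSON-RPC interface. As an illustrative sketch only (not the harness's own code), the same nvmf_create_transport method can be driven directly over the RPC Unix socket; the default socket path /var/tmp/spdk.sock and the minimal parameter set here are assumptions about this setup:

import json, socket

# Sketch: issue nvmf_create_transport over SPDK's JSON-RPC Unix socket.
# /var/tmp/spdk.sock is SPDK's default RPC socket path (an assumption
# that the target under test uses the default).
req = {"jsonrpc": "2.0", "id": 1,
       "method": "nvmf_create_transport",
       "params": {"trtype": "TCP"}}

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/var/tmp/spdk.sock")
sock.sendall(json.dumps(req).encode())
print(sock.recv(65536).decode())  # expect {"jsonrpc":"2.0","id":1,"result":true}
sock.close()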
00:29:15.661 [2024-11-04 12:33:50.046400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.661 [2024-11-04 12:33:50.046407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.661 qpair failed and we were unable to recover it.
[... the same error triplet repeats through 12:33:50.050276 ...]
00:29:15.661 [2024-11-04 12:33:50.050353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... the same error triplet repeats through 12:33:50.051185 ...]
00:29:15.661 [2024-11-04 12:33:50.051485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.661 [2024-11-04 12:33:50.051492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.661 qpair failed and we were unable to recover it.
[... the same error triplet repeats through 12:33:50.058519 ...]
00:29:15.662 [2024-11-04 12:33:50.058720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.662 [2024-11-04 12:33:50.058727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.662 qpair failed and we were unable to recover it.
[... the same error triplet repeats through 12:33:50.059336 ...]
00:29:15.662 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
[... the same error triplet repeats through 12:33:50.059808 ...]
00:29:15.662 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
[... one more error triplet at 12:33:50.060093/.060101 ...]
00:29:15.662 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:15.662 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the same error triplet repeats through 12:33:50.060589 ...]
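Similarly, rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 maps onto the nvmf_create_subsystem JSON-RPC method, with -a corresponding to allow_any_host and -s to serial_number. A hedged sketch of the equivalent request body, sent the same way as in the previous example:

import json

# Sketch of the request behind the rpc_cmd call above; parameter names
# follow SPDK's nvmf_create_subsystem JSON-RPC method.
req = {"jsonrpc": "2.0", "id": 2,
       "method": "nvmf_create_subsystem",
       "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                  "allow_any_host": True,                  # -a
                  "serial_number": "SPDK00000000000001"}}  # -s
print(json.dumps(req, indent=2))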
00:29:15.662 [2024-11-04 12:33:50.060821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.662 [2024-11-04 12:33:50.060830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.662 qpair failed and we were unable to recover it. 00:29:15.662 [2024-11-04 12:33:50.061161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.662 [2024-11-04 12:33:50.061168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.662 qpair failed and we were unable to recover it. 00:29:15.662 [2024-11-04 12:33:50.061349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.662 [2024-11-04 12:33:50.061356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.662 qpair failed and we were unable to recover it. 00:29:15.662 [2024-11-04 12:33:50.061685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.662 [2024-11-04 12:33:50.061691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.662 qpair failed and we were unable to recover it. 00:29:15.662 [2024-11-04 12:33:50.061982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.662 [2024-11-04 12:33:50.061990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.662 qpair failed and we were unable to recover it. 00:29:15.662 [2024-11-04 12:33:50.062301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.662 [2024-11-04 12:33:50.062308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.662 qpair failed and we were unable to recover it. 00:29:15.662 [2024-11-04 12:33:50.062646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.662 [2024-11-04 12:33:50.062652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.662 qpair failed and we were unable to recover it. 00:29:15.662 [2024-11-04 12:33:50.062975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.662 [2024-11-04 12:33:50.062983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.662 qpair failed and we were unable to recover it. 00:29:15.662 [2024-11-04 12:33:50.063314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.662 [2024-11-04 12:33:50.063321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.662 qpair failed and we were unable to recover it. 00:29:15.662 [2024-11-04 12:33:50.063612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.662 [2024-11-04 12:33:50.063619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.662 qpair failed and we were unable to recover it. 
00:29:15.662 [2024-11-04 12:33:50.063973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.662 [2024-11-04 12:33:50.063981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.662 qpair failed and we were unable to recover it. 00:29:15.662 [2024-11-04 12:33:50.064293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.662 [2024-11-04 12:33:50.064300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.662 qpair failed and we were unable to recover it. 00:29:15.662 [2024-11-04 12:33:50.064615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.662 [2024-11-04 12:33:50.064623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.662 qpair failed and we were unable to recover it. 00:29:15.662 [2024-11-04 12:33:50.064828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.662 [2024-11-04 12:33:50.064835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.662 qpair failed and we were unable to recover it. 00:29:15.662 [2024-11-04 12:33:50.065216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.065222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.663 [2024-11-04 12:33:50.065509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.065516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.663 [2024-11-04 12:33:50.065677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.065684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.663 [2024-11-04 12:33:50.065873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.065880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.663 [2024-11-04 12:33:50.066182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.066189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.663 [2024-11-04 12:33:50.066489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.066496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 
00:29:15.663 [2024-11-04 12:33:50.066809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.066818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.663 [2024-11-04 12:33:50.067184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.067191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.663 [2024-11-04 12:33:50.067501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.067508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.663 [2024-11-04 12:33:50.067819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.067826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.663 [2024-11-04 12:33:50.068124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.068131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.663 [2024-11-04 12:33:50.068444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.068451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.663 [2024-11-04 12:33:50.068785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.068794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.663 [2024-11-04 12:33:50.069118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.069125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.663 [2024-11-04 12:33:50.069435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.069442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.663 [2024-11-04 12:33:50.069769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.069778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 
00:29:15.663 [2024-11-04 12:33:50.070058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.663 [2024-11-04 12:33:50.070065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.663 qpair failed and we were unable to recover it.
00:29:15.663 [2024-11-04 12:33:50.070277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.663 [2024-11-04 12:33:50.070284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.663 qpair failed and we were unable to recover it.
00:29:15.663 [2024-11-04 12:33:50.070547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.663 [2024-11-04 12:33:50.070553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.663 qpair failed and we were unable to recover it.
00:29:15.663 [2024-11-04 12:33:50.070843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.663 [2024-11-04 12:33:50.070851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.663 qpair failed and we were unable to recover it.
00:29:15.663 [2024-11-04 12:33:50.071162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.663 [2024-11-04 12:33:50.071169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.663 qpair failed and we were unable to recover it.
00:29:15.663 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:15.663 [2024-11-04 12:33:50.071474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.663 [2024-11-04 12:33:50.071482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.663 qpair failed and we were unable to recover it.
00:29:15.663 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:15.663 [2024-11-04 12:33:50.071818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.663 [2024-11-04 12:33:50.071826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.663 qpair failed and we were unable to recover it.
00:29:15.663 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:15.663 [2024-11-04 12:33:50.072129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.663 [2024-11-04 12:33:50.072137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.663 qpair failed and we were unable to recover it.
00:29:15.663 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:15.663 [2024-11-04 12:33:50.072423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.663 [2024-11-04 12:33:50.072431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.663 qpair failed and we were unable to recover it.
00:29:15.663 [2024-11-04 12:33:50.072822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.663 [2024-11-04 12:33:50.072829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.663 qpair failed and we were unable to recover it.
00:29:15.663 [2024-11-04 12:33:50.073127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.663 [2024-11-04 12:33:50.073134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.663 qpair failed and we were unable to recover it.
00:29:15.663 [2024-11-04 12:33:50.073315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.663 [2024-11-04 12:33:50.073322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.663 qpair failed and we were unable to recover it.
00:29:15.663 [2024-11-04 12:33:50.073673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.663 [2024-11-04 12:33:50.073680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.663 qpair failed and we were unable to recover it.
00:29:15.663 [2024-11-04 12:33:50.074043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.663 [2024-11-04 12:33:50.074051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.663 qpair failed and we were unable to recover it.
00:29:15.663 [2024-11-04 12:33:50.074320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.663 [2024-11-04 12:33:50.074326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.663 qpair failed and we were unable to recover it.
00:29:15.663 [2024-11-04 12:33:50.074545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.663 [2024-11-04 12:33:50.074552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.663 qpair failed and we were unable to recover it.
00:29:15.663 [2024-11-04 12:33:50.074757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.663 [2024-11-04 12:33:50.074765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.663 qpair failed and we were unable to recover it.
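The retry records above are the host side of the test: every connect() to 10.0.0.2:4420 fails with errno 111, which on Linux is ECONNREFUSED, because the target's TCP listener has not been added yet; the interleaved rpc_cmd xtrace lines show the target being provisioned in parallel. A minimal sketch for confirming that state from a shell on the host (python3 and nc availability are assumed, not shown in the trace):
# Decode errno 111 with the standard errno module:
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# -> ECONNREFUSED - Connection refused
# Probe the address/port from the trace; this only succeeds once the
# nvmf_tcp_listen NOTICE appears further down in the log:
nc -zv 10.0.0.2 4420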
00:29:15.663 [2024-11-04 12:33:50.075129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.075136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.663 [2024-11-04 12:33:50.075450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.075458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.663 [2024-11-04 12:33:50.075764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.075772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.663 [2024-11-04 12:33:50.075963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.663 [2024-11-04 12:33:50.075969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.663 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.076265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.076274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.076609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.076616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.076923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.076930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.077162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.077169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.077493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.077499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.077687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.077693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 
00:29:15.664 [2024-11-04 12:33:50.078054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.078063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.078346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.078353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.078541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.078548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.078859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.078867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.079213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.079221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.079526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.079533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.079817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.079824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.080187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.080194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.080402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.080409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.080768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.080775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 
00:29:15.664 [2024-11-04 12:33:50.081093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.081100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.081210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.081216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.081267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.081273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.081478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.081485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.081662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.081677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.081918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.081926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.082094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.082101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.082264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.082271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.082444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.082451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.082680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.082687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 
00:29:15.664 [2024-11-04 12:33:50.082924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.664 [2024-11-04 12:33:50.082931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.664 qpair failed and we were unable to recover it.
00:29:15.664 [2024-11-04 12:33:50.083211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.664 [2024-11-04 12:33:50.083218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.664 qpair failed and we were unable to recover it.
00:29:15.664 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:15.664 [2024-11-04 12:33:50.083516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.664 [2024-11-04 12:33:50.083524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.664 qpair failed and we were unable to recover it.
00:29:15.664 [2024-11-04 12:33:50.083759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.664 [2024-11-04 12:33:50.083767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.664 qpair failed and we were unable to recover it.
00:29:15.664 [2024-11-04 12:33:50.083817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.664 [2024-11-04 12:33:50.083825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.664 qpair failed and we were unable to recover it.
00:29:15.664 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:15.664 [2024-11-04 12:33:50.084019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.664 [2024-11-04 12:33:50.084026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.664 qpair failed and we were unable to recover it.
00:29:15.664 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:15.664 [2024-11-04 12:33:50.084346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.664 [2024-11-04 12:33:50.084353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.664 qpair failed and we were unable to recover it.
00:29:15.664 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:15.664 [2024-11-04 12:33:50.084691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.664 [2024-11-04 12:33:50.084698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.664 qpair failed and we were unable to recover it.
00:29:15.664 [2024-11-04 12:33:50.084891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.084898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.085271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.085278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.664 qpair failed and we were unable to recover it. 00:29:15.664 [2024-11-04 12:33:50.085570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.664 [2024-11-04 12:33:50.085577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.085758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.665 [2024-11-04 12:33:50.085769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.085978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.665 [2024-11-04 12:33:50.085987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.086051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.665 [2024-11-04 12:33:50.086058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.086356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.665 [2024-11-04 12:33:50.086363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.086634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.665 [2024-11-04 12:33:50.086640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.086680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.665 [2024-11-04 12:33:50.086686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.086866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.665 [2024-11-04 12:33:50.086873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 
00:29:15.665 [2024-11-04 12:33:50.087256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.665 [2024-11-04 12:33:50.087263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.087331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.665 [2024-11-04 12:33:50.087339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.087573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.665 [2024-11-04 12:33:50.087580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.087774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.665 [2024-11-04 12:33:50.087782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.088099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.665 [2024-11-04 12:33:50.088106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.088154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.665 [2024-11-04 12:33:50.088160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.088320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.665 [2024-11-04 12:33:50.088327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.088608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.665 [2024-11-04 12:33:50.088615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.088956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.665 [2024-11-04 12:33:50.088963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.089248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.665 [2024-11-04 12:33:50.089255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420 00:29:15.665 qpair failed and we were unable to recover it. 
00:29:15.665 [2024-11-04 12:33:50.089626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.665 [2024-11-04 12:33:50.089633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.665 qpair failed and we were unable to recover it.
00:29:15.665 [2024-11-04 12:33:50.089922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.665 [2024-11-04 12:33:50.089929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.665 qpair failed and we were unable to recover it.
00:29:15.665 [2024-11-04 12:33:50.090251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.665 [2024-11-04 12:33:50.090258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.665 qpair failed and we were unable to recover it.
00:29:15.665 [2024-11-04 12:33:50.090565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.665 [2024-11-04 12:33:50.090572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7e38000b90 with addr=10.0.0.2, port=4420
00:29:15.665 qpair failed and we were unable to recover it.
00:29:15.665 [2024-11-04 12:33:50.090642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:15.665 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:15.665 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:15.665 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:15.665 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:15.665 [2024-11-04 12:33:50.101401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.665 [2024-11-04 12:33:50.101488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.665 [2024-11-04 12:33:50.101504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.665 [2024-11-04 12:33:50.101510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.665 [2024-11-04 12:33:50.101515] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:15.665 [2024-11-04 12:33:50.101531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.665 qpair failed and we were unable to recover it.
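Stitched together, the rpc_cmd xtrace lines above are the target bring-up sequence for this test case: create the subsystem, attach the Malloc0 namespace, then add the data and discovery listeners, at which point tcp.c logs the "Target Listening" notice and connect() stops being refused. A rough standalone equivalent using SPDK's scripts/rpc.py is sketched below; the rpc.py path and the Malloc0 creation parameters are assumptions (the trace only shows the rpc_cmd wrapper calls), and it presumes an spdk_tgt with the TCP transport already created:
# Hypothetical reproduction of the target-side setup visible in the trace:
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420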
00:29:15.665 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.665 12:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1828661 00:29:15.665 [2024-11-04 12:33:50.111262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.665 [2024-11-04 12:33:50.111322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.665 [2024-11-04 12:33:50.111336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.665 [2024-11-04 12:33:50.111341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.665 [2024-11-04 12:33:50.111346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:15.665 [2024-11-04 12:33:50.111357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.121248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.665 [2024-11-04 12:33:50.121299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.665 [2024-11-04 12:33:50.121310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.665 [2024-11-04 12:33:50.121315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.665 [2024-11-04 12:33:50.121319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:15.665 [2024-11-04 12:33:50.121330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.665 qpair failed and we were unable to recover it. 00:29:15.665 [2024-11-04 12:33:50.131261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.665 [2024-11-04 12:33:50.131320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.665 [2024-11-04 12:33:50.131330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.666 [2024-11-04 12:33:50.131335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.666 [2024-11-04 12:33:50.131340] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:15.666 [2024-11-04 12:33:50.131350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.666 qpair failed and we were unable to recover it. 
00:29:15.666 [2024-11-04 12:33:50.141250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.666 [2024-11-04 12:33:50.141303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.666 [2024-11-04 12:33:50.141314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.666 [2024-11-04 12:33:50.141319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.666 [2024-11-04 12:33:50.141323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:15.666 [2024-11-04 12:33:50.141334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.666 qpair failed and we were unable to recover it. 00:29:15.666 [2024-11-04 12:33:50.151259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.666 [2024-11-04 12:33:50.151316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.666 [2024-11-04 12:33:50.151327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.666 [2024-11-04 12:33:50.151332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.666 [2024-11-04 12:33:50.151336] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:15.666 [2024-11-04 12:33:50.151350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.666 qpair failed and we were unable to recover it. 00:29:15.666 [2024-11-04 12:33:50.161288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.666 [2024-11-04 12:33:50.161374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.666 [2024-11-04 12:33:50.161384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.666 [2024-11-04 12:33:50.161389] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.666 [2024-11-04 12:33:50.161394] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:15.666 [2024-11-04 12:33:50.161404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.666 qpair failed and we were unable to recover it. 
00:29:15.666 [2024-11-04 12:33:50.171295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.666 [2024-11-04 12:33:50.171369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.666 [2024-11-04 12:33:50.171379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.666 [2024-11-04 12:33:50.171384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.666 [2024-11-04 12:33:50.171388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:15.666 [2024-11-04 12:33:50.171399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.666 qpair failed and we were unable to recover it. 00:29:15.666 [2024-11-04 12:33:50.181336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.666 [2024-11-04 12:33:50.181388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.666 [2024-11-04 12:33:50.181398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.666 [2024-11-04 12:33:50.181403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.666 [2024-11-04 12:33:50.181408] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:15.666 [2024-11-04 12:33:50.181418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.666 qpair failed and we were unable to recover it. 00:29:15.666 [2024-11-04 12:33:50.191388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.666 [2024-11-04 12:33:50.191460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.666 [2024-11-04 12:33:50.191471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.666 [2024-11-04 12:33:50.191475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.666 [2024-11-04 12:33:50.191480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:15.666 [2024-11-04 12:33:50.191490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.666 qpair failed and we were unable to recover it. 
00:29:15.666 [2024-11-04 12:33:50.201432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.666 [2024-11-04 12:33:50.201530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.666 [2024-11-04 12:33:50.201552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.666 [2024-11-04 12:33:50.201559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.666 [2024-11-04 12:33:50.201564] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:15.666 [2024-11-04 12:33:50.201579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.666 qpair failed and we were unable to recover it. 00:29:15.927 [2024-11-04 12:33:50.211436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.927 [2024-11-04 12:33:50.211492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.927 [2024-11-04 12:33:50.211511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.927 [2024-11-04 12:33:50.211517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.927 [2024-11-04 12:33:50.211522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:15.927 [2024-11-04 12:33:50.211536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.927 qpair failed and we were unable to recover it. 00:29:15.927 [2024-11-04 12:33:50.221469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.927 [2024-11-04 12:33:50.221528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.927 [2024-11-04 12:33:50.221540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.927 [2024-11-04 12:33:50.221546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.927 [2024-11-04 12:33:50.221550] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:15.927 [2024-11-04 12:33:50.221562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.927 qpair failed and we were unable to recover it. 
00:29:15.927 [2024-11-04 12:33:50.231468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.927 [2024-11-04 12:33:50.231516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.927 [2024-11-04 12:33:50.231526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.927 [2024-11-04 12:33:50.231531] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.927 [2024-11-04 12:33:50.231536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:15.927 [2024-11-04 12:33:50.231547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.927 qpair failed and we were unable to recover it. 00:29:15.927 [2024-11-04 12:33:50.241380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.927 [2024-11-04 12:33:50.241426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.927 [2024-11-04 12:33:50.241437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.927 [2024-11-04 12:33:50.241443] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.927 [2024-11-04 12:33:50.241453] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:15.927 [2024-11-04 12:33:50.241465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.927 qpair failed and we were unable to recover it. 00:29:15.927 [2024-11-04 12:33:50.251556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.927 [2024-11-04 12:33:50.251613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.928 [2024-11-04 12:33:50.251624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.928 [2024-11-04 12:33:50.251629] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.928 [2024-11-04 12:33:50.251633] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:15.928 [2024-11-04 12:33:50.251644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.928 qpair failed and we were unable to recover it. 
00:29:15.928 [... 66 further CONNECT failure blocks, identical apart from timestamps, elided here (2024-11-04 12:33:50.261545 through 12:33:50.913424; console clock 00:29:15.928 to 00:29:16.487): each repeats "Unknown controller ID 0x1", "Connect command failed, rc -5" against traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1, "sct 1, sc 130", "Failed to connect tqpair=0x7f7e38000b90", "CQ transport error -6 (No such device or address) on qpair id 2", and "qpair failed and we were unable to recover it." ...]
00:29:16.487 [2024-11-04 12:33:50.923387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.487 [2024-11-04 12:33:50.923439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.487 [2024-11-04 12:33:50.923449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.487 [2024-11-04 12:33:50.923454] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.487 [2024-11-04 12:33:50.923458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.487 [2024-11-04 12:33:50.923468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.487 qpair failed and we were unable to recover it. 00:29:16.487 [2024-11-04 12:33:50.933392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.487 [2024-11-04 12:33:50.933448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.487 [2024-11-04 12:33:50.933467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.487 [2024-11-04 12:33:50.933473] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.487 [2024-11-04 12:33:50.933478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.487 [2024-11-04 12:33:50.933492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.487 qpair failed and we were unable to recover it. 00:29:16.487 [2024-11-04 12:33:50.943443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.487 [2024-11-04 12:33:50.943524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.487 [2024-11-04 12:33:50.943543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.487 [2024-11-04 12:33:50.943549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.487 [2024-11-04 12:33:50.943554] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.487 [2024-11-04 12:33:50.943568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.487 qpair failed and we were unable to recover it. 
00:29:16.487 [2024-11-04 12:33:50.953464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.487 [2024-11-04 12:33:50.953507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.487 [2024-11-04 12:33:50.953519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.487 [2024-11-04 12:33:50.953524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.487 [2024-11-04 12:33:50.953529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.487 [2024-11-04 12:33:50.953540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.487 qpair failed and we were unable to recover it. 00:29:16.487 [2024-11-04 12:33:50.963481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.487 [2024-11-04 12:33:50.963528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.487 [2024-11-04 12:33:50.963539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.487 [2024-11-04 12:33:50.963544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.487 [2024-11-04 12:33:50.963548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.487 [2024-11-04 12:33:50.963559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.488 qpair failed and we were unable to recover it. 00:29:16.488 [2024-11-04 12:33:50.973549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.488 [2024-11-04 12:33:50.973624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.488 [2024-11-04 12:33:50.973643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.488 [2024-11-04 12:33:50.973649] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.488 [2024-11-04 12:33:50.973654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.488 [2024-11-04 12:33:50.973668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.488 qpair failed and we were unable to recover it. 
00:29:16.488 [2024-11-04 12:33:50.983558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.488 [2024-11-04 12:33:50.983616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.488 [2024-11-04 12:33:50.983627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.488 [2024-11-04 12:33:50.983633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.488 [2024-11-04 12:33:50.983637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.488 [2024-11-04 12:33:50.983649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.488 qpair failed and we were unable to recover it. 00:29:16.488 [2024-11-04 12:33:50.993565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.488 [2024-11-04 12:33:50.993611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.488 [2024-11-04 12:33:50.993621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.488 [2024-11-04 12:33:50.993626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.488 [2024-11-04 12:33:50.993631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.488 [2024-11-04 12:33:50.993642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.488 qpair failed and we were unable to recover it. 00:29:16.488 [2024-11-04 12:33:51.003593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.488 [2024-11-04 12:33:51.003644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.488 [2024-11-04 12:33:51.003655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.488 [2024-11-04 12:33:51.003663] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.488 [2024-11-04 12:33:51.003668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.488 [2024-11-04 12:33:51.003678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.488 qpair failed and we were unable to recover it. 
00:29:16.488 [2024-11-04 12:33:51.013622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.488 [2024-11-04 12:33:51.013712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.488 [2024-11-04 12:33:51.013723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.488 [2024-11-04 12:33:51.013727] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.488 [2024-11-04 12:33:51.013732] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.488 [2024-11-04 12:33:51.013742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.488 qpair failed and we were unable to recover it. 00:29:16.488 [2024-11-04 12:33:51.023592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.488 [2024-11-04 12:33:51.023641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.488 [2024-11-04 12:33:51.023651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.488 [2024-11-04 12:33:51.023656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.488 [2024-11-04 12:33:51.023660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.488 [2024-11-04 12:33:51.023671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.488 qpair failed and we were unable to recover it. 00:29:16.488 [2024-11-04 12:33:51.033545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.488 [2024-11-04 12:33:51.033593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.488 [2024-11-04 12:33:51.033605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.488 [2024-11-04 12:33:51.033611] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.488 [2024-11-04 12:33:51.033616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.488 [2024-11-04 12:33:51.033627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.488 qpair failed and we were unable to recover it. 
00:29:16.488 [2024-11-04 12:33:51.043570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.488 [2024-11-04 12:33:51.043614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.488 [2024-11-04 12:33:51.043625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.488 [2024-11-04 12:33:51.043630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.488 [2024-11-04 12:33:51.043634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.488 [2024-11-04 12:33:51.043645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.488 qpair failed and we were unable to recover it. 00:29:16.488 [2024-11-04 12:33:51.053730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.488 [2024-11-04 12:33:51.053785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.488 [2024-11-04 12:33:51.053796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.488 [2024-11-04 12:33:51.053801] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.488 [2024-11-04 12:33:51.053805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.488 [2024-11-04 12:33:51.053816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.488 qpair failed and we were unable to recover it. 00:29:16.750 [2024-11-04 12:33:51.063730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.750 [2024-11-04 12:33:51.063781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.750 [2024-11-04 12:33:51.063792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.750 [2024-11-04 12:33:51.063797] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.750 [2024-11-04 12:33:51.063801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.750 [2024-11-04 12:33:51.063812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.750 qpair failed and we were unable to recover it. 
00:29:16.750 [2024-11-04 12:33:51.073793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.750 [2024-11-04 12:33:51.073849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.750 [2024-11-04 12:33:51.073859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.750 [2024-11-04 12:33:51.073864] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.750 [2024-11-04 12:33:51.073869] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.750 [2024-11-04 12:33:51.073879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.750 qpair failed and we were unable to recover it. 00:29:16.750 [2024-11-04 12:33:51.083800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.750 [2024-11-04 12:33:51.083848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.750 [2024-11-04 12:33:51.083858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.750 [2024-11-04 12:33:51.083863] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.750 [2024-11-04 12:33:51.083868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.750 [2024-11-04 12:33:51.083878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.751 qpair failed and we were unable to recover it. 00:29:16.751 [2024-11-04 12:33:51.093839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.751 [2024-11-04 12:33:51.093894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.751 [2024-11-04 12:33:51.093907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.751 [2024-11-04 12:33:51.093912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.751 [2024-11-04 12:33:51.093916] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.751 [2024-11-04 12:33:51.093928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.751 qpair failed and we were unable to recover it. 
00:29:16.751 [2024-11-04 12:33:51.103882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.751 [2024-11-04 12:33:51.103929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.751 [2024-11-04 12:33:51.103939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.751 [2024-11-04 12:33:51.103944] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.751 [2024-11-04 12:33:51.103948] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.751 [2024-11-04 12:33:51.103959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.751 qpair failed and we were unable to recover it. 00:29:16.751 [2024-11-04 12:33:51.113774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.751 [2024-11-04 12:33:51.113827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.751 [2024-11-04 12:33:51.113837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.751 [2024-11-04 12:33:51.113842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.751 [2024-11-04 12:33:51.113846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.751 [2024-11-04 12:33:51.113857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.751 qpair failed and we were unable to recover it. 00:29:16.751 [2024-11-04 12:33:51.123935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.751 [2024-11-04 12:33:51.123980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.751 [2024-11-04 12:33:51.123990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.751 [2024-11-04 12:33:51.123995] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.751 [2024-11-04 12:33:51.123999] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.751 [2024-11-04 12:33:51.124010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.751 qpair failed and we were unable to recover it. 
00:29:16.751 [2024-11-04 12:33:51.133963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.751 [2024-11-04 12:33:51.134013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.751 [2024-11-04 12:33:51.134023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.751 [2024-11-04 12:33:51.134028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.751 [2024-11-04 12:33:51.134032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.751 [2024-11-04 12:33:51.134045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.751 qpair failed and we were unable to recover it. 00:29:16.751 [2024-11-04 12:33:51.143999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.751 [2024-11-04 12:33:51.144051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.751 [2024-11-04 12:33:51.144061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.751 [2024-11-04 12:33:51.144066] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.751 [2024-11-04 12:33:51.144070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.751 [2024-11-04 12:33:51.144081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.751 qpair failed and we were unable to recover it. 00:29:16.751 [2024-11-04 12:33:51.154004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.751 [2024-11-04 12:33:51.154056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.751 [2024-11-04 12:33:51.154066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.751 [2024-11-04 12:33:51.154071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.751 [2024-11-04 12:33:51.154075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.751 [2024-11-04 12:33:51.154085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.751 qpair failed and we were unable to recover it. 
00:29:16.751 [2024-11-04 12:33:51.164032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.751 [2024-11-04 12:33:51.164084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.751 [2024-11-04 12:33:51.164094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.751 [2024-11-04 12:33:51.164099] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.751 [2024-11-04 12:33:51.164104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.751 [2024-11-04 12:33:51.164114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.751 qpair failed and we were unable to recover it. 00:29:16.751 [2024-11-04 12:33:51.174074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.751 [2024-11-04 12:33:51.174125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.751 [2024-11-04 12:33:51.174135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.751 [2024-11-04 12:33:51.174140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.751 [2024-11-04 12:33:51.174144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.751 [2024-11-04 12:33:51.174155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.751 qpair failed and we were unable to recover it. 00:29:16.751 [2024-11-04 12:33:51.184122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.751 [2024-11-04 12:33:51.184176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.751 [2024-11-04 12:33:51.184189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.751 [2024-11-04 12:33:51.184194] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.751 [2024-11-04 12:33:51.184198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.751 [2024-11-04 12:33:51.184208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.751 qpair failed and we were unable to recover it. 
00:29:16.751 [2024-11-04 12:33:51.194125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.751 [2024-11-04 12:33:51.194170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.751 [2024-11-04 12:33:51.194180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.751 [2024-11-04 12:33:51.194185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.751 [2024-11-04 12:33:51.194189] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.751 [2024-11-04 12:33:51.194199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.751 qpair failed and we were unable to recover it. 00:29:16.751 [2024-11-04 12:33:51.204132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.751 [2024-11-04 12:33:51.204177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.751 [2024-11-04 12:33:51.204186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.751 [2024-11-04 12:33:51.204191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.751 [2024-11-04 12:33:51.204196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.751 [2024-11-04 12:33:51.204206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.751 qpair failed and we were unable to recover it. 00:29:16.751 [2024-11-04 12:33:51.214178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.751 [2024-11-04 12:33:51.214230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.751 [2024-11-04 12:33:51.214240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.751 [2024-11-04 12:33:51.214245] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.751 [2024-11-04 12:33:51.214249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.751 [2024-11-04 12:33:51.214259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.751 qpair failed and we were unable to recover it. 
00:29:16.751 [2024-11-04 12:33:51.224130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.751 [2024-11-04 12:33:51.224183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.752 [2024-11-04 12:33:51.224212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.752 [2024-11-04 12:33:51.224217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.752 [2024-11-04 12:33:51.224222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.752 [2024-11-04 12:33:51.224242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.752 qpair failed and we were unable to recover it. 00:29:16.752 [2024-11-04 12:33:51.234191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.752 [2024-11-04 12:33:51.234246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.752 [2024-11-04 12:33:51.234257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.752 [2024-11-04 12:33:51.234261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.752 [2024-11-04 12:33:51.234266] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.752 [2024-11-04 12:33:51.234277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.752 qpair failed and we were unable to recover it. 00:29:16.752 [2024-11-04 12:33:51.244256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.752 [2024-11-04 12:33:51.244312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.752 [2024-11-04 12:33:51.244322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.752 [2024-11-04 12:33:51.244327] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.752 [2024-11-04 12:33:51.244331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.752 [2024-11-04 12:33:51.244342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.752 qpair failed and we were unable to recover it. 
00:29:16.752 [2024-11-04 12:33:51.254279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.752 [2024-11-04 12:33:51.254334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.752 [2024-11-04 12:33:51.254345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.752 [2024-11-04 12:33:51.254350] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.752 [2024-11-04 12:33:51.254354] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.752 [2024-11-04 12:33:51.254365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.752 qpair failed and we were unable to recover it. 00:29:16.752 [2024-11-04 12:33:51.264336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.752 [2024-11-04 12:33:51.264386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.752 [2024-11-04 12:33:51.264396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.752 [2024-11-04 12:33:51.264401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.752 [2024-11-04 12:33:51.264405] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.752 [2024-11-04 12:33:51.264416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.752 qpair failed and we were unable to recover it. 00:29:16.752 [2024-11-04 12:33:51.274315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.752 [2024-11-04 12:33:51.274403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.752 [2024-11-04 12:33:51.274415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.752 [2024-11-04 12:33:51.274420] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.752 [2024-11-04 12:33:51.274424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.752 [2024-11-04 12:33:51.274435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.752 qpair failed and we were unable to recover it. 
00:29:16.752 [2024-11-04 12:33:51.284383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.752 [2024-11-04 12:33:51.284432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.752 [2024-11-04 12:33:51.284442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.752 [2024-11-04 12:33:51.284447] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.752 [2024-11-04 12:33:51.284451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.752 [2024-11-04 12:33:51.284462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.752 qpair failed and we were unable to recover it. 00:29:16.752 [2024-11-04 12:33:51.294385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.752 [2024-11-04 12:33:51.294471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.752 [2024-11-04 12:33:51.294484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.752 [2024-11-04 12:33:51.294489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.752 [2024-11-04 12:33:51.294494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.752 [2024-11-04 12:33:51.294505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.752 qpair failed and we were unable to recover it. 00:29:16.752 [2024-11-04 12:33:51.304413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.752 [2024-11-04 12:33:51.304472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.752 [2024-11-04 12:33:51.304491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.752 [2024-11-04 12:33:51.304497] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.752 [2024-11-04 12:33:51.304502] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.752 [2024-11-04 12:33:51.304516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.752 qpair failed and we were unable to recover it. 
00:29:16.752 [2024-11-04 12:33:51.314443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.752 [2024-11-04 12:33:51.314491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.752 [2024-11-04 12:33:51.314510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.752 [2024-11-04 12:33:51.314516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.752 [2024-11-04 12:33:51.314525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:16.752 [2024-11-04 12:33:51.314539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.752 qpair failed and we were unable to recover it. 00:29:17.013 [2024-11-04 12:33:51.324434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.013 [2024-11-04 12:33:51.324491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.013 [2024-11-04 12:33:51.324502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.013 [2024-11-04 12:33:51.324507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.013 [2024-11-04 12:33:51.324512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.013 [2024-11-04 12:33:51.324523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.013 qpair failed and we were unable to recover it. 00:29:17.013 [2024-11-04 12:33:51.334505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.013 [2024-11-04 12:33:51.334556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.013 [2024-11-04 12:33:51.334567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.013 [2024-11-04 12:33:51.334572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.013 [2024-11-04 12:33:51.334576] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.013 [2024-11-04 12:33:51.334587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.013 qpair failed and we were unable to recover it. 
00:29:17.013 [2024-11-04 12:33:51.344523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.013 [2024-11-04 12:33:51.344578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.013 [2024-11-04 12:33:51.344588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.013 [2024-11-04 12:33:51.344594] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.013 [2024-11-04 12:33:51.344598] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.013 [2024-11-04 12:33:51.344608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.013 qpair failed and we were unable to recover it. 00:29:17.013 [2024-11-04 12:33:51.354534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.013 [2024-11-04 12:33:51.354580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.013 [2024-11-04 12:33:51.354590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.013 [2024-11-04 12:33:51.354595] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.013 [2024-11-04 12:33:51.354600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.013 [2024-11-04 12:33:51.354610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.013 qpair failed and we were unable to recover it. 00:29:17.013 [2024-11-04 12:33:51.364578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.013 [2024-11-04 12:33:51.364635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.013 [2024-11-04 12:33:51.364645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.013 [2024-11-04 12:33:51.364650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.013 [2024-11-04 12:33:51.364655] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.013 [2024-11-04 12:33:51.364665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.013 qpair failed and we were unable to recover it. 
00:29:17.013 [2024-11-04 12:33:51.374625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.013 [2024-11-04 12:33:51.374674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.013 [2024-11-04 12:33:51.374684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.014 [2024-11-04 12:33:51.374689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.014 [2024-11-04 12:33:51.374694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.014 [2024-11-04 12:33:51.374704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.014 qpair failed and we were unable to recover it. 00:29:17.014 [2024-11-04 12:33:51.384692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.014 [2024-11-04 12:33:51.384763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.014 [2024-11-04 12:33:51.384774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.014 [2024-11-04 12:33:51.384779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.014 [2024-11-04 12:33:51.384783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.014 [2024-11-04 12:33:51.384794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.014 qpair failed and we were unable to recover it. 00:29:17.014 [2024-11-04 12:33:51.394682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.014 [2024-11-04 12:33:51.394729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.014 [2024-11-04 12:33:51.394740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.014 [2024-11-04 12:33:51.394749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.014 [2024-11-04 12:33:51.394754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.014 [2024-11-04 12:33:51.394765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.014 qpair failed and we were unable to recover it. 
00:29:17.014 [2024-11-04 12:33:51.404710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.014 [2024-11-04 12:33:51.404771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.014 [2024-11-04 12:33:51.404782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.014 [2024-11-04 12:33:51.404787] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.014 [2024-11-04 12:33:51.404794] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.014 [2024-11-04 12:33:51.404806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.014 qpair failed and we were unable to recover it. 00:29:17.014 [2024-11-04 12:33:51.414770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.014 [2024-11-04 12:33:51.414822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.014 [2024-11-04 12:33:51.414832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.014 [2024-11-04 12:33:51.414837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.014 [2024-11-04 12:33:51.414841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.014 [2024-11-04 12:33:51.414852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.014 qpair failed and we were unable to recover it. 00:29:17.014 [2024-11-04 12:33:51.424774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.014 [2024-11-04 12:33:51.424822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.014 [2024-11-04 12:33:51.424832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.014 [2024-11-04 12:33:51.424837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.014 [2024-11-04 12:33:51.424842] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.014 [2024-11-04 12:33:51.424852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.014 qpair failed and we were unable to recover it. 
00:29:17.014 [2024-11-04 12:33:51.434782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.014 [2024-11-04 12:33:51.434845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.014 [2024-11-04 12:33:51.434855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.014 [2024-11-04 12:33:51.434860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.014 [2024-11-04 12:33:51.434865] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.014 [2024-11-04 12:33:51.434875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.014 qpair failed and we were unable to recover it.
00:29:17.014 [2024-11-04 12:33:51.444766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.014 [2024-11-04 12:33:51.444815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.014 [2024-11-04 12:33:51.444825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.014 [2024-11-04 12:33:51.444830] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.014 [2024-11-04 12:33:51.444835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.014 [2024-11-04 12:33:51.444845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.014 qpair failed and we were unable to recover it.
00:29:17.014 [2024-11-04 12:33:51.454840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.014 [2024-11-04 12:33:51.454896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.014 [2024-11-04 12:33:51.454906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.014 [2024-11-04 12:33:51.454911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.014 [2024-11-04 12:33:51.454915] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.014 [2024-11-04 12:33:51.454926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.014 qpair failed and we were unable to recover it.
00:29:17.014 [2024-11-04 12:33:51.464890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.014 [2024-11-04 12:33:51.464941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.014 [2024-11-04 12:33:51.464951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.014 [2024-11-04 12:33:51.464956] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.014 [2024-11-04 12:33:51.464960] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.014 [2024-11-04 12:33:51.464970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.014 qpair failed and we were unable to recover it.
00:29:17.014 [2024-11-04 12:33:51.474893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.014 [2024-11-04 12:33:51.474957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.014 [2024-11-04 12:33:51.474967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.014 [2024-11-04 12:33:51.474972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.014 [2024-11-04 12:33:51.474976] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.014 [2024-11-04 12:33:51.474986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.014 qpair failed and we were unable to recover it.
00:29:17.014 [2024-11-04 12:33:51.484900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.014 [2024-11-04 12:33:51.484979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.014 [2024-11-04 12:33:51.484989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.014 [2024-11-04 12:33:51.484994] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.014 [2024-11-04 12:33:51.484998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.014 [2024-11-04 12:33:51.485008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.014 qpair failed and we were unable to recover it.
00:29:17.014 [2024-11-04 12:33:51.494964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.014 [2024-11-04 12:33:51.495011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.014 [2024-11-04 12:33:51.495021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.014 [2024-11-04 12:33:51.495028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.014 [2024-11-04 12:33:51.495033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.014 [2024-11-04 12:33:51.495043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.014 qpair failed and we were unable to recover it.
00:29:17.014 [2024-11-04 12:33:51.505005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.014 [2024-11-04 12:33:51.505054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.014 [2024-11-04 12:33:51.505064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.014 [2024-11-04 12:33:51.505069] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.014 [2024-11-04 12:33:51.505074] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.014 [2024-11-04 12:33:51.505084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.014 qpair failed and we were unable to recover it.
00:29:17.014 [2024-11-04 12:33:51.514930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.014 [2024-11-04 12:33:51.514986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.014 [2024-11-04 12:33:51.514996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.014 [2024-11-04 12:33:51.515001] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.014 [2024-11-04 12:33:51.515005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.014 [2024-11-04 12:33:51.515016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.014 qpair failed and we were unable to recover it.
00:29:17.014 [2024-11-04 12:33:51.525004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.014 [2024-11-04 12:33:51.525053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.014 [2024-11-04 12:33:51.525063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.014 [2024-11-04 12:33:51.525068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.014 [2024-11-04 12:33:51.525072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.014 [2024-11-04 12:33:51.525083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.014 qpair failed and we were unable to recover it.
00:29:17.014 [2024-11-04 12:33:51.535074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.015 [2024-11-04 12:33:51.535166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.015 [2024-11-04 12:33:51.535176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.015 [2024-11-04 12:33:51.535181] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.015 [2024-11-04 12:33:51.535185] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.015 [2024-11-04 12:33:51.535196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.015 qpair failed and we were unable to recover it.
00:29:17.015 [2024-11-04 12:33:51.545115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.015 [2024-11-04 12:33:51.545214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.015 [2024-11-04 12:33:51.545225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.015 [2024-11-04 12:33:51.545230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.015 [2024-11-04 12:33:51.545234] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.015 [2024-11-04 12:33:51.545244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.015 qpair failed and we were unable to recover it.
00:29:17.015 [2024-11-04 12:33:51.555086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.015 [2024-11-04 12:33:51.555138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.015 [2024-11-04 12:33:51.555148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.015 [2024-11-04 12:33:51.555153] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.015 [2024-11-04 12:33:51.555157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.015 [2024-11-04 12:33:51.555168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.015 qpair failed and we were unable to recover it.
00:29:17.015 [2024-11-04 12:33:51.565115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.015 [2024-11-04 12:33:51.565159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.015 [2024-11-04 12:33:51.565169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.015 [2024-11-04 12:33:51.565174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.015 [2024-11-04 12:33:51.565179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.015 [2024-11-04 12:33:51.565189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.015 qpair failed and we were unable to recover it.
00:29:17.015 [2024-11-04 12:33:51.575170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.015 [2024-11-04 12:33:51.575219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.015 [2024-11-04 12:33:51.575229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.015 [2024-11-04 12:33:51.575234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.015 [2024-11-04 12:33:51.575238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.015 [2024-11-04 12:33:51.575249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.015 qpair failed and we were unable to recover it.
00:29:17.277 [2024-11-04 12:33:51.585198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.277 [2024-11-04 12:33:51.585256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.277 [2024-11-04 12:33:51.585266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.277 [2024-11-04 12:33:51.585273] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.277 [2024-11-04 12:33:51.585278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.277 [2024-11-04 12:33:51.585289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.277 qpair failed and we were unable to recover it.
00:29:17.277 [2024-11-04 12:33:51.595120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.277 [2024-11-04 12:33:51.595167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.277 [2024-11-04 12:33:51.595177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.277 [2024-11-04 12:33:51.595182] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.277 [2024-11-04 12:33:51.595187] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.277 [2024-11-04 12:33:51.595197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.277 qpair failed and we were unable to recover it.
00:29:17.277 [2024-11-04 12:33:51.605264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.277 [2024-11-04 12:33:51.605349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.277 [2024-11-04 12:33:51.605359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.277 [2024-11-04 12:33:51.605364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.277 [2024-11-04 12:33:51.605368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.277 [2024-11-04 12:33:51.605379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.277 qpair failed and we were unable to recover it.
00:29:17.277 [2024-11-04 12:33:51.615309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.277 [2024-11-04 12:33:51.615357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.277 [2024-11-04 12:33:51.615367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.277 [2024-11-04 12:33:51.615372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.277 [2024-11-04 12:33:51.615377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.277 [2024-11-04 12:33:51.615387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.277 qpair failed and we were unable to recover it.
00:29:17.277 [2024-11-04 12:33:51.625220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.277 [2024-11-04 12:33:51.625269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.277 [2024-11-04 12:33:51.625279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.277 [2024-11-04 12:33:51.625284] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.278 [2024-11-04 12:33:51.625288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.278 [2024-11-04 12:33:51.625298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.278 qpair failed and we were unable to recover it.
00:29:17.278 [2024-11-04 12:33:51.635349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.278 [2024-11-04 12:33:51.635394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.278 [2024-11-04 12:33:51.635405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.278 [2024-11-04 12:33:51.635409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.278 [2024-11-04 12:33:51.635414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.278 [2024-11-04 12:33:51.635424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.278 qpair failed and we were unable to recover it.
00:29:17.278 [2024-11-04 12:33:51.645355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.278 [2024-11-04 12:33:51.645413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.278 [2024-11-04 12:33:51.645423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.278 [2024-11-04 12:33:51.645428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.278 [2024-11-04 12:33:51.645432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.278 [2024-11-04 12:33:51.645442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.278 qpair failed and we were unable to recover it.
00:29:17.278 [2024-11-04 12:33:51.655404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.278 [2024-11-04 12:33:51.655455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.278 [2024-11-04 12:33:51.655465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.278 [2024-11-04 12:33:51.655470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.278 [2024-11-04 12:33:51.655474] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.278 [2024-11-04 12:33:51.655485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.278 qpair failed and we were unable to recover it.
00:29:17.278 [2024-11-04 12:33:51.665404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.278 [2024-11-04 12:33:51.665452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.278 [2024-11-04 12:33:51.665465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.278 [2024-11-04 12:33:51.665470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.278 [2024-11-04 12:33:51.665474] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.278 [2024-11-04 12:33:51.665485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.278 qpair failed and we were unable to recover it.
00:29:17.278 [2024-11-04 12:33:51.675427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.278 [2024-11-04 12:33:51.675476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.278 [2024-11-04 12:33:51.675498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.278 [2024-11-04 12:33:51.675504] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.278 [2024-11-04 12:33:51.675509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.278 [2024-11-04 12:33:51.675523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.278 qpair failed and we were unable to recover it.
00:29:17.278 [2024-11-04 12:33:51.685490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.278 [2024-11-04 12:33:51.685547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.278 [2024-11-04 12:33:51.685566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.278 [2024-11-04 12:33:51.685572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.278 [2024-11-04 12:33:51.685577] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.278 [2024-11-04 12:33:51.685591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.278 qpair failed and we were unable to recover it.
00:29:17.278 [2024-11-04 12:33:51.695405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.278 [2024-11-04 12:33:51.695496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.278 [2024-11-04 12:33:51.695507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.278 [2024-11-04 12:33:51.695513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.278 [2024-11-04 12:33:51.695517] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.278 [2024-11-04 12:33:51.695529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.278 qpair failed and we were unable to recover it.
00:29:17.278 [2024-11-04 12:33:51.705425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.278 [2024-11-04 12:33:51.705483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.278 [2024-11-04 12:33:51.705494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.278 [2024-11-04 12:33:51.705499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.278 [2024-11-04 12:33:51.705503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.278 [2024-11-04 12:33:51.705514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.278 qpair failed and we were unable to recover it.
00:29:17.278 [2024-11-04 12:33:51.715583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.278 [2024-11-04 12:33:51.715641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.278 [2024-11-04 12:33:51.715652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.278 [2024-11-04 12:33:51.715657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.278 [2024-11-04 12:33:51.715661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.278 [2024-11-04 12:33:51.715675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.278 qpair failed and we were unable to recover it.
00:29:17.278 [2024-11-04 12:33:51.725572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.278 [2024-11-04 12:33:51.725617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.278 [2024-11-04 12:33:51.725628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.278 [2024-11-04 12:33:51.725633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.278 [2024-11-04 12:33:51.725637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.278 [2024-11-04 12:33:51.725647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.278 qpair failed and we were unable to recover it.
00:29:17.278 [2024-11-04 12:33:51.735655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.278 [2024-11-04 12:33:51.735704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.278 [2024-11-04 12:33:51.735714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.278 [2024-11-04 12:33:51.735719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.278 [2024-11-04 12:33:51.735724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.278 [2024-11-04 12:33:51.735734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.278 qpair failed and we were unable to recover it.
00:29:17.278 [2024-11-04 12:33:51.745643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.279 [2024-11-04 12:33:51.745694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.279 [2024-11-04 12:33:51.745704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.279 [2024-11-04 12:33:51.745709] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.279 [2024-11-04 12:33:51.745713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.279 [2024-11-04 12:33:51.745724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.279 qpair failed and we were unable to recover it.
00:29:17.279 [2024-11-04 12:33:51.755663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.279 [2024-11-04 12:33:51.755707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.279 [2024-11-04 12:33:51.755717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.279 [2024-11-04 12:33:51.755722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.279 [2024-11-04 12:33:51.755727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.279 [2024-11-04 12:33:51.755737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.279 qpair failed and we were unable to recover it.
00:29:17.279 [2024-11-04 12:33:51.765695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.279 [2024-11-04 12:33:51.765748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.279 [2024-11-04 12:33:51.765761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.279 [2024-11-04 12:33:51.765766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.279 [2024-11-04 12:33:51.765770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.279 [2024-11-04 12:33:51.765781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.279 qpair failed and we were unable to recover it.
00:29:17.279 [2024-11-04 12:33:51.775743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.279 [2024-11-04 12:33:51.775827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.279 [2024-11-04 12:33:51.775837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.279 [2024-11-04 12:33:51.775842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.279 [2024-11-04 12:33:51.775846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.279 [2024-11-04 12:33:51.775856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.279 qpair failed and we were unable to recover it.
00:29:17.279 [2024-11-04 12:33:51.785730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.279 [2024-11-04 12:33:51.785785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.279 [2024-11-04 12:33:51.785795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.279 [2024-11-04 12:33:51.785800] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.279 [2024-11-04 12:33:51.785804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.279 [2024-11-04 12:33:51.785815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.279 qpair failed and we were unable to recover it.
00:29:17.279 [2024-11-04 12:33:51.795762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.279 [2024-11-04 12:33:51.795805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.279 [2024-11-04 12:33:51.795815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.279 [2024-11-04 12:33:51.795820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.279 [2024-11-04 12:33:51.795824] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.279 [2024-11-04 12:33:51.795835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.279 qpair failed and we were unable to recover it.
00:29:17.279 [2024-11-04 12:33:51.805818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.279 [2024-11-04 12:33:51.805905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.279 [2024-11-04 12:33:51.805916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.279 [2024-11-04 12:33:51.805920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.279 [2024-11-04 12:33:51.805927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.279 [2024-11-04 12:33:51.805938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.279 qpair failed and we were unable to recover it.
00:29:17.279 [2024-11-04 12:33:51.815891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.279 [2024-11-04 12:33:51.815943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.279 [2024-11-04 12:33:51.815953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.279 [2024-11-04 12:33:51.815958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.279 [2024-11-04 12:33:51.815963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.279 [2024-11-04 12:33:51.815974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.279 qpair failed and we were unable to recover it.
00:29:17.279 [2024-11-04 12:33:51.825890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.279 [2024-11-04 12:33:51.825942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.279 [2024-11-04 12:33:51.825953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.279 [2024-11-04 12:33:51.825958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.279 [2024-11-04 12:33:51.825962] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.279 [2024-11-04 12:33:51.825972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.279 qpair failed and we were unable to recover it.
00:29:17.279 [2024-11-04 12:33:51.835915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.279 [2024-11-04 12:33:51.836002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.279 [2024-11-04 12:33:51.836012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.279 [2024-11-04 12:33:51.836017] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.279 [2024-11-04 12:33:51.836021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.279 [2024-11-04 12:33:51.836032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.279 qpair failed and we were unable to recover it.
00:29:17.542 [2024-11-04 12:33:51.845951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.542 [2024-11-04 12:33:51.845994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.542 [2024-11-04 12:33:51.846003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.542 [2024-11-04 12:33:51.846008] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.542 [2024-11-04 12:33:51.846013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.542 [2024-11-04 12:33:51.846023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.542 qpair failed and we were unable to recover it.
00:29:17.542 [2024-11-04 12:33:51.855976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.542 [2024-11-04 12:33:51.856030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.542 [2024-11-04 12:33:51.856040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.542 [2024-11-04 12:33:51.856045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.542 [2024-11-04 12:33:51.856049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.542 [2024-11-04 12:33:51.856060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.542 qpair failed and we were unable to recover it.
00:29:17.542 [2024-11-04 12:33:51.866013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.542 [2024-11-04 12:33:51.866061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.542 [2024-11-04 12:33:51.866072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.542 [2024-11-04 12:33:51.866077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.542 [2024-11-04 12:33:51.866081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.542 [2024-11-04 12:33:51.866092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.542 qpair failed and we were unable to recover it.
00:29:17.542 [2024-11-04 12:33:51.876027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.542 [2024-11-04 12:33:51.876125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.542 [2024-11-04 12:33:51.876135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.542 [2024-11-04 12:33:51.876140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.542 [2024-11-04 12:33:51.876144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.542 [2024-11-04 12:33:51.876155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.542 qpair failed and we were unable to recover it.
00:29:17.542 [2024-11-04 12:33:51.886058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.542 [2024-11-04 12:33:51.886137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.542 [2024-11-04 12:33:51.886147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.542 [2024-11-04 12:33:51.886152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.542 [2024-11-04 12:33:51.886157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.542 [2024-11-04 12:33:51.886167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.542 qpair failed and we were unable to recover it.
00:29:17.542 [2024-11-04 12:33:51.896140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.542 [2024-11-04 12:33:51.896192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.542 [2024-11-04 12:33:51.896201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.542 [2024-11-04 12:33:51.896206] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.542 [2024-11-04 12:33:51.896217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.542 [2024-11-04 12:33:51.896227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.542 qpair failed and we were unable to recover it.
00:29:17.542 [2024-11-04 12:33:51.906100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.542 [2024-11-04 12:33:51.906151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.542 [2024-11-04 12:33:51.906161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.542 [2024-11-04 12:33:51.906165] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.542 [2024-11-04 12:33:51.906170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.542 [2024-11-04 12:33:51.906180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.542 qpair failed and we were unable to recover it.
00:29:17.542 [2024-11-04 12:33:51.916117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.542 [2024-11-04 12:33:51.916160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.542 [2024-11-04 12:33:51.916170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.542 [2024-11-04 12:33:51.916174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.542 [2024-11-04 12:33:51.916179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.542 [2024-11-04 12:33:51.916189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.542 qpair failed and we were unable to recover it.
00:29:17.542 [2024-11-04 12:33:51.926004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.542 [2024-11-04 12:33:51.926049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.542 [2024-11-04 12:33:51.926059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.542 [2024-11-04 12:33:51.926063] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.542 [2024-11-04 12:33:51.926068] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.542 [2024-11-04 12:33:51.926078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.542 qpair failed and we were unable to recover it.
00:29:17.542 [2024-11-04 12:33:51.936200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.542 [2024-11-04 12:33:51.936265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.542 [2024-11-04 12:33:51.936275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.542 [2024-11-04 12:33:51.936280] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.542 [2024-11-04 12:33:51.936284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.542 [2024-11-04 12:33:51.936294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.542 qpair failed and we were unable to recover it.
00:29:17.542 [2024-11-04 12:33:51.946233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.542 [2024-11-04 12:33:51.946280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.542 [2024-11-04 12:33:51.946290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.542 [2024-11-04 12:33:51.946295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.542 [2024-11-04 12:33:51.946299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.542 [2024-11-04 12:33:51.946309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.542 qpair failed and we were unable to recover it.
00:29:17.542 [2024-11-04 12:33:51.956236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.542 [2024-11-04 12:33:51.956288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.542 [2024-11-04 12:33:51.956298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.542 [2024-11-04 12:33:51.956303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.542 [2024-11-04 12:33:51.956307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.542 [2024-11-04 12:33:51.956318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.542 qpair failed and we were unable to recover it.
00:29:17.542 [2024-11-04 12:33:51.966232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.543 [2024-11-04 12:33:51.966276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.543 [2024-11-04 12:33:51.966286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.543 [2024-11-04 12:33:51.966291] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.543 [2024-11-04 12:33:51.966295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.543 [2024-11-04 12:33:51.966305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.543 qpair failed and we were unable to recover it.
00:29:17.543 [2024-11-04 12:33:51.976319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.543 [2024-11-04 12:33:51.976366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.543 [2024-11-04 12:33:51.976375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.543 [2024-11-04 12:33:51.976380] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.543 [2024-11-04 12:33:51.976385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.543 [2024-11-04 12:33:51.976395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.543 qpair failed and we were unable to recover it.
00:29:17.543 [2024-11-04 12:33:51.986335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.543 [2024-11-04 12:33:51.986384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.543 [2024-11-04 12:33:51.986393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.543 [2024-11-04 12:33:51.986401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.543 [2024-11-04 12:33:51.986405] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.543 [2024-11-04 12:33:51.986416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.543 qpair failed and we were unable to recover it.
00:29:17.543 [2024-11-04 12:33:51.996345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.543 [2024-11-04 12:33:51.996435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.543 [2024-11-04 12:33:51.996445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.543 [2024-11-04 12:33:51.996450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.543 [2024-11-04 12:33:51.996454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:17.543 [2024-11-04 12:33:51.996465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.543 qpair failed and we were unable to recover it.
00:29:17.543 [2024-11-04 12:33:52.006347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-11-04 12:33:52.006387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-11-04 12:33:52.006397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-11-04 12:33:52.006402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-11-04 12:33:52.006407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.543 [2024-11-04 12:33:52.006417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.543 qpair failed and we were unable to recover it. 00:29:17.543 [2024-11-04 12:33:52.016413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-11-04 12:33:52.016465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-11-04 12:33:52.016475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-11-04 12:33:52.016480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-11-04 12:33:52.016484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.543 [2024-11-04 12:33:52.016495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.543 qpair failed and we were unable to recover it. 00:29:17.543 [2024-11-04 12:33:52.026469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-11-04 12:33:52.026521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-11-04 12:33:52.026531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-11-04 12:33:52.026536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-11-04 12:33:52.026541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.543 [2024-11-04 12:33:52.026551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.543 qpair failed and we were unable to recover it. 
00:29:17.543 [2024-11-04 12:33:52.036432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-11-04 12:33:52.036522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-11-04 12:33:52.036541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-11-04 12:33:52.036547] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-11-04 12:33:52.036551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.543 [2024-11-04 12:33:52.036566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.543 qpair failed and we were unable to recover it. 00:29:17.543 [2024-11-04 12:33:52.046469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-11-04 12:33:52.046532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-11-04 12:33:52.046544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-11-04 12:33:52.046549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-11-04 12:33:52.046554] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.543 [2024-11-04 12:33:52.046565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.543 qpair failed and we were unable to recover it. 00:29:17.543 [2024-11-04 12:33:52.056527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-11-04 12:33:52.056632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-11-04 12:33:52.056643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-11-04 12:33:52.056648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-11-04 12:33:52.056653] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.543 [2024-11-04 12:33:52.056664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.543 qpair failed and we were unable to recover it. 
00:29:17.543 [2024-11-04 12:33:52.066541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-11-04 12:33:52.066593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-11-04 12:33:52.066605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-11-04 12:33:52.066610] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-11-04 12:33:52.066614] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.543 [2024-11-04 12:33:52.066626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.543 qpair failed and we were unable to recover it. 00:29:17.543 [2024-11-04 12:33:52.076585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-11-04 12:33:52.076639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-11-04 12:33:52.076658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-11-04 12:33:52.076668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-11-04 12:33:52.076673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.543 [2024-11-04 12:33:52.076687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.543 qpair failed and we were unable to recover it. 00:29:17.543 [2024-11-04 12:33:52.086588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-11-04 12:33:52.086652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-11-04 12:33:52.086664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-11-04 12:33:52.086670] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.543 [2024-11-04 12:33:52.086674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.543 [2024-11-04 12:33:52.086685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.543 qpair failed and we were unable to recover it. 
00:29:17.543 [2024-11-04 12:33:52.096625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.543 [2024-11-04 12:33:52.096678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.543 [2024-11-04 12:33:52.096689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.543 [2024-11-04 12:33:52.096694] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.544 [2024-11-04 12:33:52.096698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.544 [2024-11-04 12:33:52.096709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.544 qpair failed and we were unable to recover it. 00:29:17.544 [2024-11-04 12:33:52.106723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.544 [2024-11-04 12:33:52.106776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.544 [2024-11-04 12:33:52.106786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.544 [2024-11-04 12:33:52.106791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.544 [2024-11-04 12:33:52.106796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.544 [2024-11-04 12:33:52.106806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.544 qpair failed and we were unable to recover it. 00:29:17.805 [2024-11-04 12:33:52.116694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.805 [2024-11-04 12:33:52.116740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.805 [2024-11-04 12:33:52.116757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.805 [2024-11-04 12:33:52.116765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.805 [2024-11-04 12:33:52.116770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.805 [2024-11-04 12:33:52.116781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.805 qpair failed and we were unable to recover it. 
00:29:17.805 [2024-11-04 12:33:52.126670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.805 [2024-11-04 12:33:52.126716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.805 [2024-11-04 12:33:52.126727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.805 [2024-11-04 12:33:52.126732] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.805 [2024-11-04 12:33:52.126736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.805 [2024-11-04 12:33:52.126751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.805 qpair failed and we were unable to recover it. 00:29:17.805 [2024-11-04 12:33:52.136710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.805 [2024-11-04 12:33:52.136781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.805 [2024-11-04 12:33:52.136794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.805 [2024-11-04 12:33:52.136800] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.805 [2024-11-04 12:33:52.136806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.805 [2024-11-04 12:33:52.136817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.805 qpair failed and we were unable to recover it. 00:29:17.805 [2024-11-04 12:33:52.146785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.805 [2024-11-04 12:33:52.146837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.805 [2024-11-04 12:33:52.146848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.805 [2024-11-04 12:33:52.146853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.805 [2024-11-04 12:33:52.146857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.805 [2024-11-04 12:33:52.146868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.805 qpair failed and we were unable to recover it. 
00:29:17.805 [2024-11-04 12:33:52.156792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.805 [2024-11-04 12:33:52.156843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.805 [2024-11-04 12:33:52.156854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.805 [2024-11-04 12:33:52.156859] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.805 [2024-11-04 12:33:52.156864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.805 [2024-11-04 12:33:52.156875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.805 qpair failed and we were unable to recover it. 00:29:17.805 [2024-11-04 12:33:52.166662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.805 [2024-11-04 12:33:52.166704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.805 [2024-11-04 12:33:52.166717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.805 [2024-11-04 12:33:52.166723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.805 [2024-11-04 12:33:52.166727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.805 [2024-11-04 12:33:52.166738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.805 qpair failed and we were unable to recover it. 00:29:17.805 [2024-11-04 12:33:52.176844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.805 [2024-11-04 12:33:52.176940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.805 [2024-11-04 12:33:52.176950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.806 [2024-11-04 12:33:52.176955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.806 [2024-11-04 12:33:52.176960] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.806 [2024-11-04 12:33:52.176970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.806 qpair failed and we were unable to recover it. 
00:29:17.806 [2024-11-04 12:33:52.186902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.806 [2024-11-04 12:33:52.186959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.806 [2024-11-04 12:33:52.186969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.806 [2024-11-04 12:33:52.186974] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.806 [2024-11-04 12:33:52.186978] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.806 [2024-11-04 12:33:52.186988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.806 qpair failed and we were unable to recover it. 00:29:17.806 [2024-11-04 12:33:52.196874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.806 [2024-11-04 12:33:52.196923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.806 [2024-11-04 12:33:52.196933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.806 [2024-11-04 12:33:52.196938] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.806 [2024-11-04 12:33:52.196942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.806 [2024-11-04 12:33:52.196953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.806 qpair failed and we were unable to recover it. 00:29:17.806 [2024-11-04 12:33:52.206900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.806 [2024-11-04 12:33:52.206942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.806 [2024-11-04 12:33:52.206951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.806 [2024-11-04 12:33:52.206956] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.806 [2024-11-04 12:33:52.206961] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.806 [2024-11-04 12:33:52.206974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.806 qpair failed and we were unable to recover it. 
00:29:17.806 [2024-11-04 12:33:52.216950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.806 [2024-11-04 12:33:52.216998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.806 [2024-11-04 12:33:52.217008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.806 [2024-11-04 12:33:52.217013] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.806 [2024-11-04 12:33:52.217018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.806 [2024-11-04 12:33:52.217028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.806 qpair failed and we were unable to recover it. 00:29:17.806 [2024-11-04 12:33:52.226978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.806 [2024-11-04 12:33:52.227028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.806 [2024-11-04 12:33:52.227038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.806 [2024-11-04 12:33:52.227043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.806 [2024-11-04 12:33:52.227047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.806 [2024-11-04 12:33:52.227058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.806 qpair failed and we were unable to recover it. 00:29:17.806 [2024-11-04 12:33:52.236990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.806 [2024-11-04 12:33:52.237034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.806 [2024-11-04 12:33:52.237044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.806 [2024-11-04 12:33:52.237049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.806 [2024-11-04 12:33:52.237053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.806 [2024-11-04 12:33:52.237064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.806 qpair failed and we were unable to recover it. 
00:29:17.806 [2024-11-04 12:33:52.247008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.806 [2024-11-04 12:33:52.247095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.806 [2024-11-04 12:33:52.247104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.806 [2024-11-04 12:33:52.247109] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.806 [2024-11-04 12:33:52.247114] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.806 [2024-11-04 12:33:52.247124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.806 qpair failed and we were unable to recover it. 00:29:17.806 [2024-11-04 12:33:52.257093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.806 [2024-11-04 12:33:52.257147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.806 [2024-11-04 12:33:52.257160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.806 [2024-11-04 12:33:52.257165] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.806 [2024-11-04 12:33:52.257169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.806 [2024-11-04 12:33:52.257180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.806 qpair failed and we were unable to recover it. 00:29:17.806 [2024-11-04 12:33:52.267012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.806 [2024-11-04 12:33:52.267067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.806 [2024-11-04 12:33:52.267082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.806 [2024-11-04 12:33:52.267087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.806 [2024-11-04 12:33:52.267092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.806 [2024-11-04 12:33:52.267105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.806 qpair failed and we were unable to recover it. 
00:29:17.806 [2024-11-04 12:33:52.277147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.806 [2024-11-04 12:33:52.277197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.806 [2024-11-04 12:33:52.277207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.806 [2024-11-04 12:33:52.277212] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.806 [2024-11-04 12:33:52.277217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.806 [2024-11-04 12:33:52.277227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.806 qpair failed and we were unable to recover it. 00:29:17.806 [2024-11-04 12:33:52.287120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.806 [2024-11-04 12:33:52.287160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.806 [2024-11-04 12:33:52.287170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.806 [2024-11-04 12:33:52.287174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.806 [2024-11-04 12:33:52.287179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.806 [2024-11-04 12:33:52.287189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.806 qpair failed and we were unable to recover it. 00:29:17.806 [2024-11-04 12:33:52.297188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.806 [2024-11-04 12:33:52.297235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.806 [2024-11-04 12:33:52.297245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.806 [2024-11-04 12:33:52.297250] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.806 [2024-11-04 12:33:52.297254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.806 [2024-11-04 12:33:52.297267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.806 qpair failed and we were unable to recover it. 
00:29:17.806 [2024-11-04 12:33:52.307322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.807 [2024-11-04 12:33:52.307378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.807 [2024-11-04 12:33:52.307389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.807 [2024-11-04 12:33:52.307394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.807 [2024-11-04 12:33:52.307399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.807 [2024-11-04 12:33:52.307410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.807 qpair failed and we were unable to recover it. 00:29:17.807 [2024-11-04 12:33:52.317302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.807 [2024-11-04 12:33:52.317355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.807 [2024-11-04 12:33:52.317365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.807 [2024-11-04 12:33:52.317370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.807 [2024-11-04 12:33:52.317374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.807 [2024-11-04 12:33:52.317385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.807 qpair failed and we were unable to recover it. 00:29:17.807 [2024-11-04 12:33:52.327236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.807 [2024-11-04 12:33:52.327278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.807 [2024-11-04 12:33:52.327288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.807 [2024-11-04 12:33:52.327293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.807 [2024-11-04 12:33:52.327298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.807 [2024-11-04 12:33:52.327308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.807 qpair failed and we were unable to recover it. 
00:29:17.807 [2024-11-04 12:33:52.337327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.807 [2024-11-04 12:33:52.337379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.807 [2024-11-04 12:33:52.337389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.807 [2024-11-04 12:33:52.337394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.807 [2024-11-04 12:33:52.337398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.807 [2024-11-04 12:33:52.337409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.807 qpair failed and we were unable to recover it. 00:29:17.807 [2024-11-04 12:33:52.347348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.807 [2024-11-04 12:33:52.347398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.807 [2024-11-04 12:33:52.347411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.807 [2024-11-04 12:33:52.347416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.807 [2024-11-04 12:33:52.347421] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.807 [2024-11-04 12:33:52.347431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.807 qpair failed and we were unable to recover it. 00:29:17.807 [2024-11-04 12:33:52.357336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.807 [2024-11-04 12:33:52.357387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.807 [2024-11-04 12:33:52.357396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.807 [2024-11-04 12:33:52.357401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.807 [2024-11-04 12:33:52.357406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.807 [2024-11-04 12:33:52.357416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.807 qpair failed and we were unable to recover it. 
00:29:17.807 [2024-11-04 12:33:52.367350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.807 [2024-11-04 12:33:52.367394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.807 [2024-11-04 12:33:52.367404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.807 [2024-11-04 12:33:52.367409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.807 [2024-11-04 12:33:52.367413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:17.807 [2024-11-04 12:33:52.367424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.807 qpair failed and we were unable to recover it. 00:29:18.071 [2024-11-04 12:33:52.377296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.071 [2024-11-04 12:33:52.377355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.071 [2024-11-04 12:33:52.377365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.071 [2024-11-04 12:33:52.377370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.071 [2024-11-04 12:33:52.377375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.071 [2024-11-04 12:33:52.377385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.071 qpair failed and we were unable to recover it. 00:29:18.071 [2024-11-04 12:33:52.387466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.071 [2024-11-04 12:33:52.387519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.071 [2024-11-04 12:33:52.387530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.071 [2024-11-04 12:33:52.387535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.071 [2024-11-04 12:33:52.387542] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.071 [2024-11-04 12:33:52.387553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.071 qpair failed and we were unable to recover it. 
00:29:18.071 [2024-11-04 12:33:52.397490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.071 [2024-11-04 12:33:52.397543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.071 [2024-11-04 12:33:52.397553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.071 [2024-11-04 12:33:52.397558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.071 [2024-11-04 12:33:52.397562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.071 [2024-11-04 12:33:52.397573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.071 qpair failed and we were unable to recover it. 00:29:18.071 [2024-11-04 12:33:52.407443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.071 [2024-11-04 12:33:52.407491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.071 [2024-11-04 12:33:52.407510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.071 [2024-11-04 12:33:52.407516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.071 [2024-11-04 12:33:52.407521] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.071 [2024-11-04 12:33:52.407535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.071 qpair failed and we were unable to recover it. 00:29:18.071 [2024-11-04 12:33:52.417538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.071 [2024-11-04 12:33:52.417590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.071 [2024-11-04 12:33:52.417602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.071 [2024-11-04 12:33:52.417607] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.071 [2024-11-04 12:33:52.417612] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.071 [2024-11-04 12:33:52.417624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.071 qpair failed and we were unable to recover it. 
00:29:18.071 [2024-11-04 12:33:52.427574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.071 [2024-11-04 12:33:52.427638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.071 [2024-11-04 12:33:52.427658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.071 [2024-11-04 12:33:52.427664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.071 [2024-11-04 12:33:52.427669] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.071 [2024-11-04 12:33:52.427683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.071 qpair failed and we were unable to recover it. 00:29:18.071 [2024-11-04 12:33:52.437586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.071 [2024-11-04 12:33:52.437642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.071 [2024-11-04 12:33:52.437653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.071 [2024-11-04 12:33:52.437659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.071 [2024-11-04 12:33:52.437663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.071 [2024-11-04 12:33:52.437675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.071 qpair failed and we were unable to recover it. 00:29:18.071 [2024-11-04 12:33:52.447580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.071 [2024-11-04 12:33:52.447620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.071 [2024-11-04 12:33:52.447630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.072 [2024-11-04 12:33:52.447636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.072 [2024-11-04 12:33:52.447641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.072 [2024-11-04 12:33:52.447651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.072 qpair failed and we were unable to recover it. 
00:29:18.072 [2024-11-04 12:33:52.457666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.072 [2024-11-04 12:33:52.457720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.072 [2024-11-04 12:33:52.457730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.072 [2024-11-04 12:33:52.457735] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.072 [2024-11-04 12:33:52.457740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.072 [2024-11-04 12:33:52.457754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.072 qpair failed and we were unable to recover it. 00:29:18.072 [2024-11-04 12:33:52.467704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.072 [2024-11-04 12:33:52.467791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.072 [2024-11-04 12:33:52.467802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.072 [2024-11-04 12:33:52.467807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.072 [2024-11-04 12:33:52.467811] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.072 [2024-11-04 12:33:52.467822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.072 qpair failed and we were unable to recover it. 00:29:18.072 [2024-11-04 12:33:52.477702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.072 [2024-11-04 12:33:52.477751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.072 [2024-11-04 12:33:52.477762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.072 [2024-11-04 12:33:52.477770] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.072 [2024-11-04 12:33:52.477775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.072 [2024-11-04 12:33:52.477786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.072 qpair failed and we were unable to recover it. 
00:29:18.072 [2024-11-04 12:33:52.487702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.072 [2024-11-04 12:33:52.487754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.072 [2024-11-04 12:33:52.487765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.072 [2024-11-04 12:33:52.487771] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.072 [2024-11-04 12:33:52.487775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.072 [2024-11-04 12:33:52.487786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.072 qpair failed and we were unable to recover it. 00:29:18.072 [2024-11-04 12:33:52.497783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.072 [2024-11-04 12:33:52.497835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.072 [2024-11-04 12:33:52.497845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.072 [2024-11-04 12:33:52.497850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.072 [2024-11-04 12:33:52.497855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.072 [2024-11-04 12:33:52.497866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.072 qpair failed and we were unable to recover it. 00:29:18.072 [2024-11-04 12:33:52.507703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.072 [2024-11-04 12:33:52.507756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.072 [2024-11-04 12:33:52.507766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.072 [2024-11-04 12:33:52.507772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.072 [2024-11-04 12:33:52.507777] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.072 [2024-11-04 12:33:52.507787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.072 qpair failed and we were unable to recover it. 
00:29:18.072 [2024-11-04 12:33:52.517771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.072 [2024-11-04 12:33:52.517820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.072 [2024-11-04 12:33:52.517834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.072 [2024-11-04 12:33:52.517840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.072 [2024-11-04 12:33:52.517844] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.072 [2024-11-04 12:33:52.517857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.072 qpair failed and we were unable to recover it.
00:29:18.072 [2024-11-04 12:33:52.527776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.072 [2024-11-04 12:33:52.527818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.072 [2024-11-04 12:33:52.527829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.072 [2024-11-04 12:33:52.527834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.072 [2024-11-04 12:33:52.527839] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.072 [2024-11-04 12:33:52.527850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.072 qpair failed and we were unable to recover it.
00:29:18.072 [2024-11-04 12:33:52.537858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.072 [2024-11-04 12:33:52.537912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.072 [2024-11-04 12:33:52.537922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.072 [2024-11-04 12:33:52.537927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.072 [2024-11-04 12:33:52.537932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.072 [2024-11-04 12:33:52.537943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.072 qpair failed and we were unable to recover it.
00:29:18.072 [2024-11-04 12:33:52.547805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.072 [2024-11-04 12:33:52.547863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.072 [2024-11-04 12:33:52.547875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.072 [2024-11-04 12:33:52.547880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.072 [2024-11-04 12:33:52.547885] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.072 [2024-11-04 12:33:52.547896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.072 qpair failed and we were unable to recover it.
00:29:18.072 [2024-11-04 12:33:52.557951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.072 [2024-11-04 12:33:52.558053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.072 [2024-11-04 12:33:52.558064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.072 [2024-11-04 12:33:52.558070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.072 [2024-11-04 12:33:52.558075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.072 [2024-11-04 12:33:52.558085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.072 qpair failed and we were unable to recover it.
00:29:18.072 [2024-11-04 12:33:52.567807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.072 [2024-11-04 12:33:52.567855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.072 [2024-11-04 12:33:52.567865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.072 [2024-11-04 12:33:52.567874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.072 [2024-11-04 12:33:52.567879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.072 [2024-11-04 12:33:52.567890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.072 qpair failed and we were unable to recover it.
00:29:18.072 [2024-11-04 12:33:52.578028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.072 [2024-11-04 12:33:52.578119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.072 [2024-11-04 12:33:52.578129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.072 [2024-11-04 12:33:52.578134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.072 [2024-11-04 12:33:52.578139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.072 [2024-11-04 12:33:52.578150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.073 qpair failed and we were unable to recover it.
00:29:18.073 [2024-11-04 12:33:52.588038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.073 [2024-11-04 12:33:52.588092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.073 [2024-11-04 12:33:52.588102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.073 [2024-11-04 12:33:52.588107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.073 [2024-11-04 12:33:52.588112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.073 [2024-11-04 12:33:52.588123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.073 qpair failed and we were unable to recover it.
00:29:18.073 [2024-11-04 12:33:52.597938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.073 [2024-11-04 12:33:52.597992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.073 [2024-11-04 12:33:52.598002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.073 [2024-11-04 12:33:52.598008] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.073 [2024-11-04 12:33:52.598012] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.073 [2024-11-04 12:33:52.598023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.073 qpair failed and we were unable to recover it.
00:29:18.073 [2024-11-04 12:33:52.608020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.073 [2024-11-04 12:33:52.608082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.073 [2024-11-04 12:33:52.608091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.073 [2024-11-04 12:33:52.608096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.073 [2024-11-04 12:33:52.608101] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.073 [2024-11-04 12:33:52.608112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.073 qpair failed and we were unable to recover it.
00:29:18.073 [2024-11-04 12:33:52.618130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.073 [2024-11-04 12:33:52.618181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.073 [2024-11-04 12:33:52.618191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.073 [2024-11-04 12:33:52.618197] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.073 [2024-11-04 12:33:52.618202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.073 [2024-11-04 12:33:52.618212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.073 qpair failed and we were unable to recover it.
00:29:18.073 [2024-11-04 12:33:52.628158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.073 [2024-11-04 12:33:52.628238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.073 [2024-11-04 12:33:52.628248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.073 [2024-11-04 12:33:52.628253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.073 [2024-11-04 12:33:52.628257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.073 [2024-11-04 12:33:52.628268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.073 qpair failed and we were unable to recover it.
00:29:18.073 [2024-11-04 12:33:52.638175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.334 [2024-11-04 12:33:52.638228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.334 [2024-11-04 12:33:52.638238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.334 [2024-11-04 12:33:52.638246] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.334 [2024-11-04 12:33:52.638252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.334 [2024-11-04 12:33:52.638262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.334 qpair failed and we were unable to recover it.
00:29:18.334 [2024-11-04 12:33:52.648177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.334 [2024-11-04 12:33:52.648220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.334 [2024-11-04 12:33:52.648230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.334 [2024-11-04 12:33:52.648235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.334 [2024-11-04 12:33:52.648239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.334 [2024-11-04 12:33:52.648250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.334 qpair failed and we were unable to recover it.
00:29:18.334 [2024-11-04 12:33:52.658195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.334 [2024-11-04 12:33:52.658274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.334 [2024-11-04 12:33:52.658287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.334 [2024-11-04 12:33:52.658293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.334 [2024-11-04 12:33:52.658297] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.334 [2024-11-04 12:33:52.658309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.334 qpair failed and we were unable to recover it.
00:29:18.334 [2024-11-04 12:33:52.668274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.334 [2024-11-04 12:33:52.668325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.334 [2024-11-04 12:33:52.668335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.334 [2024-11-04 12:33:52.668340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.334 [2024-11-04 12:33:52.668344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.334 [2024-11-04 12:33:52.668355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.334 qpair failed and we were unable to recover it.
00:29:18.334 [2024-11-04 12:33:52.678289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.334 [2024-11-04 12:33:52.678336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.334 [2024-11-04 12:33:52.678346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.334 [2024-11-04 12:33:52.678351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.334 [2024-11-04 12:33:52.678356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.334 [2024-11-04 12:33:52.678367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.334 qpair failed and we were unable to recover it.
00:29:18.334 [2024-11-04 12:33:52.688271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.334 [2024-11-04 12:33:52.688316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.334 [2024-11-04 12:33:52.688326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.334 [2024-11-04 12:33:52.688331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.334 [2024-11-04 12:33:52.688336] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.334 [2024-11-04 12:33:52.688347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.334 qpair failed and we were unable to recover it.
00:29:18.334 [2024-11-04 12:33:52.698345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.334 [2024-11-04 12:33:52.698435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.334 [2024-11-04 12:33:52.698445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.334 [2024-11-04 12:33:52.698450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.334 [2024-11-04 12:33:52.698456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.334 [2024-11-04 12:33:52.698469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.334 qpair failed and we were unable to recover it.
00:29:18.334 [2024-11-04 12:33:52.708269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.334 [2024-11-04 12:33:52.708323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.334 [2024-11-04 12:33:52.708333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.334 [2024-11-04 12:33:52.708338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.334 [2024-11-04 12:33:52.708343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.334 [2024-11-04 12:33:52.708353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.334 qpair failed and we were unable to recover it.
00:29:18.334 [2024-11-04 12:33:52.718324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.334 [2024-11-04 12:33:52.718376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.334 [2024-11-04 12:33:52.718386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.334 [2024-11-04 12:33:52.718391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.334 [2024-11-04 12:33:52.718396] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.334 [2024-11-04 12:33:52.718406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.334 qpair failed and we were unable to recover it.
00:29:18.334 [2024-11-04 12:33:52.728395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.334 [2024-11-04 12:33:52.728478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.334 [2024-11-04 12:33:52.728488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.334 [2024-11-04 12:33:52.728493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.334 [2024-11-04 12:33:52.728499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.334 [2024-11-04 12:33:52.728509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.334 qpair failed and we were unable to recover it.
00:29:18.334 [2024-11-04 12:33:52.738449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.334 [2024-11-04 12:33:52.738500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.334 [2024-11-04 12:33:52.738510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.334 [2024-11-04 12:33:52.738515] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.334 [2024-11-04 12:33:52.738520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.334 [2024-11-04 12:33:52.738531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.334 qpair failed and we were unable to recover it.
00:29:18.334 [2024-11-04 12:33:52.748464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.334 [2024-11-04 12:33:52.748519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.334 [2024-11-04 12:33:52.748531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.334 [2024-11-04 12:33:52.748536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.334 [2024-11-04 12:33:52.748541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.334 [2024-11-04 12:33:52.748552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.334 qpair failed and we were unable to recover it.
00:29:18.334 [2024-11-04 12:33:52.758522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.334 [2024-11-04 12:33:52.758571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.334 [2024-11-04 12:33:52.758581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.334 [2024-11-04 12:33:52.758586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.334 [2024-11-04 12:33:52.758591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.334 [2024-11-04 12:33:52.758602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.334 qpair failed and we were unable to recover it.
00:29:18.334 [2024-11-04 12:33:52.768535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.335 [2024-11-04 12:33:52.768612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.335 [2024-11-04 12:33:52.768625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.335 [2024-11-04 12:33:52.768631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.335 [2024-11-04 12:33:52.768636] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.335 [2024-11-04 12:33:52.768649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.335 qpair failed and we were unable to recover it.
00:29:18.335 [2024-11-04 12:33:52.778462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.335 [2024-11-04 12:33:52.778515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.335 [2024-11-04 12:33:52.778526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.335 [2024-11-04 12:33:52.778531] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.335 [2024-11-04 12:33:52.778536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.335 [2024-11-04 12:33:52.778546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.335 qpair failed and we were unable to recover it.
00:29:18.335 [2024-11-04 12:33:52.788609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.335 [2024-11-04 12:33:52.788695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.335 [2024-11-04 12:33:52.788704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.335 [2024-11-04 12:33:52.788710] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.335 [2024-11-04 12:33:52.788715] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.335 [2024-11-04 12:33:52.788729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.335 qpair failed and we were unable to recover it.
00:29:18.335 [2024-11-04 12:33:52.798604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.335 [2024-11-04 12:33:52.798651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.335 [2024-11-04 12:33:52.798662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.335 [2024-11-04 12:33:52.798667] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.335 [2024-11-04 12:33:52.798672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.335 [2024-11-04 12:33:52.798682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.335 qpair failed and we were unable to recover it.
00:29:18.335 [2024-11-04 12:33:52.808609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.335 [2024-11-04 12:33:52.808651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.335 [2024-11-04 12:33:52.808661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.335 [2024-11-04 12:33:52.808666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.335 [2024-11-04 12:33:52.808671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.335 [2024-11-04 12:33:52.808681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.335 qpair failed and we were unable to recover it.
00:29:18.335 [2024-11-04 12:33:52.818676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.335 [2024-11-04 12:33:52.818750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.335 [2024-11-04 12:33:52.818760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.335 [2024-11-04 12:33:52.818765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.335 [2024-11-04 12:33:52.818770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.335 [2024-11-04 12:33:52.818781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.335 qpair failed and we were unable to recover it.
00:29:18.335 [2024-11-04 12:33:52.828706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.335 [2024-11-04 12:33:52.828759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.335 [2024-11-04 12:33:52.828769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.335 [2024-11-04 12:33:52.828775] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.335 [2024-11-04 12:33:52.828779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.335 [2024-11-04 12:33:52.828791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.335 qpair failed and we were unable to recover it.
00:29:18.335 [2024-11-04 12:33:52.838756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.335 [2024-11-04 12:33:52.838842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.335 [2024-11-04 12:33:52.838855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.335 [2024-11-04 12:33:52.838860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.335 [2024-11-04 12:33:52.838865] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.335 [2024-11-04 12:33:52.838876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.335 qpair failed and we were unable to recover it.
00:29:18.335 [2024-11-04 12:33:52.848721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.335 [2024-11-04 12:33:52.848773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.335 [2024-11-04 12:33:52.848783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.335 [2024-11-04 12:33:52.848788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.335 [2024-11-04 12:33:52.848793] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.335 [2024-11-04 12:33:52.848804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.335 qpair failed and we were unable to recover it.
00:29:18.335 [2024-11-04 12:33:52.858780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.335 [2024-11-04 12:33:52.858831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.335 [2024-11-04 12:33:52.858841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.335 [2024-11-04 12:33:52.858846] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.335 [2024-11-04 12:33:52.858851] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.335 [2024-11-04 12:33:52.858862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.335 qpair failed and we were unable to recover it.
00:29:18.335 [2024-11-04 12:33:52.868822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.335 [2024-11-04 12:33:52.868876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.335 [2024-11-04 12:33:52.868885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.335 [2024-11-04 12:33:52.868891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.335 [2024-11-04 12:33:52.868895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.335 [2024-11-04 12:33:52.868906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.335 qpair failed and we were unable to recover it.
00:29:18.335 [2024-11-04 12:33:52.878894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.335 [2024-11-04 12:33:52.878977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.335 [2024-11-04 12:33:52.878986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.335 [2024-11-04 12:33:52.878991] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.335 [2024-11-04 12:33:52.878999] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.335 [2024-11-04 12:33:52.879010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.335 qpair failed and we were unable to recover it.
00:29:18.335 [2024-11-04 12:33:52.888823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.335 [2024-11-04 12:33:52.888871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.335 [2024-11-04 12:33:52.888881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.335 [2024-11-04 12:33:52.888886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.335 [2024-11-04 12:33:52.888891] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.335 [2024-11-04 12:33:52.888901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.335 qpair failed and we were unable to recover it.
00:29:18.335 [2024-11-04 12:33:52.898927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.335 [2024-11-04 12:33:52.898983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.335 [2024-11-04 12:33:52.898993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.335 [2024-11-04 12:33:52.898998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.335 [2024-11-04 12:33:52.899003] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.336 [2024-11-04 12:33:52.899014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.336 qpair failed and we were unable to recover it.
00:29:18.598 [2024-11-04 12:33:52.908942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.598 [2024-11-04 12:33:52.908994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.598 [2024-11-04 12:33:52.909004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.598 [2024-11-04 12:33:52.909009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.598 [2024-11-04 12:33:52.909014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.598 [2024-11-04 12:33:52.909025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.598 qpair failed and we were unable to recover it.
00:29:18.598 [2024-11-04 12:33:52.919012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.598 [2024-11-04 12:33:52.919062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.598 [2024-11-04 12:33:52.919072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.598 [2024-11-04 12:33:52.919077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.598 [2024-11-04 12:33:52.919082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.598 [2024-11-04 12:33:52.919092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.598 qpair failed and we were unable to recover it.
00:29:18.598 [2024-11-04 12:33:52.928917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.598 [2024-11-04 12:33:52.928966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.598 [2024-11-04 12:33:52.928976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.598 [2024-11-04 12:33:52.928981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.598 [2024-11-04 12:33:52.928986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.598 [2024-11-04 12:33:52.928996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.598 qpair failed and we were unable to recover it.
00:29:18.598 [2024-11-04 12:33:52.939087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.598 [2024-11-04 12:33:52.939142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.598 [2024-11-04 12:33:52.939152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.598 [2024-11-04 12:33:52.939157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.598 [2024-11-04 12:33:52.939162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.598 [2024-11-04 12:33:52.939172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.598 qpair failed and we were unable to recover it.
00:29:18.598 [2024-11-04 12:33:52.949039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.598 [2024-11-04 12:33:52.949091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.598 [2024-11-04 12:33:52.949101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.598 [2024-11-04 12:33:52.949106] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.598 [2024-11-04 12:33:52.949111] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.598 [2024-11-04 12:33:52.949121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.598 qpair failed and we were unable to recover it.
00:29:18.598 [2024-11-04 12:33:52.959092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.598 [2024-11-04 12:33:52.959137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.598 [2024-11-04 12:33:52.959147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.598 [2024-11-04 12:33:52.959152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.598 [2024-11-04 12:33:52.959157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.598 [2024-11-04 12:33:52.959168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.598 qpair failed and we were unable to recover it.
00:29:18.598 [2024-11-04 12:33:52.969073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.598 [2024-11-04 12:33:52.969113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.598 [2024-11-04 12:33:52.969123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.598 [2024-11-04 12:33:52.969128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.598 [2024-11-04 12:33:52.969135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.598 [2024-11-04 12:33:52.969146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.598 qpair failed and we were unable to recover it.
00:29:18.598 [2024-11-04 12:33:52.979125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.598 [2024-11-04 12:33:52.979175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.598 [2024-11-04 12:33:52.979184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.598 [2024-11-04 12:33:52.979190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.598 [2024-11-04 12:33:52.979194] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.598 [2024-11-04 12:33:52.979205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.598 qpair failed and we were unable to recover it.
00:29:18.598 [2024-11-04 12:33:52.989143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.598 [2024-11-04 12:33:52.989197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.598 [2024-11-04 12:33:52.989206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.598 [2024-11-04 12:33:52.989212] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.598 [2024-11-04 12:33:52.989216] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.598 [2024-11-04 12:33:52.989227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.598 qpair failed and we were unable to recover it.
00:29:18.598 [2024-11-04 12:33:52.999186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.598 [2024-11-04 12:33:52.999264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.598 [2024-11-04 12:33:52.999274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.598 [2024-11-04 12:33:52.999279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.598 [2024-11-04 12:33:52.999284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.598 [2024-11-04 12:33:52.999294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.598 qpair failed and we were unable to recover it.
00:29:18.598 [2024-11-04 12:33:53.009018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.598 [2024-11-04 12:33:53.009061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.598 [2024-11-04 12:33:53.009071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.598 [2024-11-04 12:33:53.009076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.598 [2024-11-04 12:33:53.009081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.598 [2024-11-04 12:33:53.009091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.598 qpair failed and we were unable to recover it.
00:29:18.599 [2024-11-04 12:33:53.019263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.599 [2024-11-04 12:33:53.019370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.599 [2024-11-04 12:33:53.019384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.599 [2024-11-04 12:33:53.019390] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.599 [2024-11-04 12:33:53.019395] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.599 [2024-11-04 12:33:53.019407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.599 qpair failed and we were unable to recover it.
00:29:18.599 [2024-11-04 12:33:53.029264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.599 [2024-11-04 12:33:53.029350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.599 [2024-11-04 12:33:53.029360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.599 [2024-11-04 12:33:53.029365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.599 [2024-11-04 12:33:53.029371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.599 [2024-11-04 12:33:53.029382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.599 qpair failed and we were unable to recover it.
00:29:18.599 [2024-11-04 12:33:53.039157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.599 [2024-11-04 12:33:53.039207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.599 [2024-11-04 12:33:53.039217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.599 [2024-11-04 12:33:53.039222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.599 [2024-11-04 12:33:53.039227] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.599 [2024-11-04 12:33:53.039238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.599 qpair failed and we were unable to recover it.
00:29:18.599 [2024-11-04 12:33:53.049251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.599 [2024-11-04 12:33:53.049295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.599 [2024-11-04 12:33:53.049305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.599 [2024-11-04 12:33:53.049311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.599 [2024-11-04 12:33:53.049315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.599 [2024-11-04 12:33:53.049326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.599 qpair failed and we were unable to recover it.
00:29:18.599 [2024-11-04 12:33:53.059352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.599 [2024-11-04 12:33:53.059403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.599 [2024-11-04 12:33:53.059413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.599 [2024-11-04 12:33:53.059421] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.599 [2024-11-04 12:33:53.059425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.599 [2024-11-04 12:33:53.059436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.599 qpair failed and we were unable to recover it.
00:29:18.599 [2024-11-04 12:33:53.069250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.599 [2024-11-04 12:33:53.069308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.599 [2024-11-04 12:33:53.069317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.599 [2024-11-04 12:33:53.069322] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.599 [2024-11-04 12:33:53.069327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.599 [2024-11-04 12:33:53.069337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.599 qpair failed and we were unable to recover it.
00:29:18.599 [2024-11-04 12:33:53.079396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.599 [2024-11-04 12:33:53.079444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.599 [2024-11-04 12:33:53.079455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.599 [2024-11-04 12:33:53.079460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.599 [2024-11-04 12:33:53.079464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.599 [2024-11-04 12:33:53.079475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.599 qpair failed and we were unable to recover it.
00:29:18.599 [2024-11-04 12:33:53.089387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.599 [2024-11-04 12:33:53.089433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.599 [2024-11-04 12:33:53.089442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.599 [2024-11-04 12:33:53.089447] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.599 [2024-11-04 12:33:53.089452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.599 [2024-11-04 12:33:53.089462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.599 qpair failed and we were unable to recover it.
00:29:18.599 [2024-11-04 12:33:53.099436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.599 [2024-11-04 12:33:53.099488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.599 [2024-11-04 12:33:53.099498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.599 [2024-11-04 12:33:53.099503] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.599 [2024-11-04 12:33:53.099508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.599 [2024-11-04 12:33:53.099519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.599 qpair failed and we were unable to recover it.
00:29:18.599 [2024-11-04 12:33:53.109370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.599 [2024-11-04 12:33:53.109422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.599 [2024-11-04 12:33:53.109432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.599 [2024-11-04 12:33:53.109437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.599 [2024-11-04 12:33:53.109441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.599 [2024-11-04 12:33:53.109452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.599 qpair failed and we were unable to recover it.
00:29:18.599 [2024-11-04 12:33:53.119501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.599 [2024-11-04 12:33:53.119552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.599 [2024-11-04 12:33:53.119562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.599 [2024-11-04 12:33:53.119568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.599 [2024-11-04 12:33:53.119573] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.599 [2024-11-04 12:33:53.119583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.599 qpair failed and we were unable to recover it.
00:29:18.599 [2024-11-04 12:33:53.129504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.599 [2024-11-04 12:33:53.129544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.599 [2024-11-04 12:33:53.129554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.599 [2024-11-04 12:33:53.129560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.599 [2024-11-04 12:33:53.129564] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.599 [2024-11-04 12:33:53.129575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.599 qpair failed and we were unable to recover it.
00:29:18.599 [2024-11-04 12:33:53.139536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.599 [2024-11-04 12:33:53.139590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.599 [2024-11-04 12:33:53.139600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.599 [2024-11-04 12:33:53.139605] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.599 [2024-11-04 12:33:53.139610] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.599 [2024-11-04 12:33:53.139620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.599 qpair failed and we were unable to recover it.
00:29:18.599 [2024-11-04 12:33:53.149552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.599 [2024-11-04 12:33:53.149607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.599 [2024-11-04 12:33:53.149617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.599 [2024-11-04 12:33:53.149625] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.599 [2024-11-04 12:33:53.149629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.599 [2024-11-04 12:33:53.149640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.599 qpair failed and we were unable to recover it.
00:29:18.599 [2024-11-04 12:33:53.159630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.599 [2024-11-04 12:33:53.159674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.599 [2024-11-04 12:33:53.159684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.599 [2024-11-04 12:33:53.159689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.599 [2024-11-04 12:33:53.159694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.599 [2024-11-04 12:33:53.159704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.599 qpair failed and we were unable to recover it.
00:29:18.862 [2024-11-04 12:33:53.169619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.862 [2024-11-04 12:33:53.169673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.862 [2024-11-04 12:33:53.169683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.862 [2024-11-04 12:33:53.169688] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.862 [2024-11-04 12:33:53.169693] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.862 [2024-11-04 12:33:53.169703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.862 qpair failed and we were unable to recover it.
00:29:18.862 [2024-11-04 12:33:53.179674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.862 [2024-11-04 12:33:53.179723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.862 [2024-11-04 12:33:53.179733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.862 [2024-11-04 12:33:53.179738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.862 [2024-11-04 12:33:53.179743] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.862 [2024-11-04 12:33:53.179757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.862 qpair failed and we were unable to recover it.
00:29:18.862 [2024-11-04 12:33:53.189709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.862 [2024-11-04 12:33:53.189762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.862 [2024-11-04 12:33:53.189773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.862 [2024-11-04 12:33:53.189778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.862 [2024-11-04 12:33:53.189783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.862 [2024-11-04 12:33:53.189793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.862 qpair failed and we were unable to recover it.
00:29:18.862 [2024-11-04 12:33:53.199732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.862 [2024-11-04 12:33:53.199783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.862 [2024-11-04 12:33:53.199795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.862 [2024-11-04 12:33:53.199800] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.862 [2024-11-04 12:33:53.199805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:18.862 [2024-11-04 12:33:53.199816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:18.862 qpair failed and we were unable to recover it.
00:29:18.862 [2024-11-04 12:33:53.209716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.862 [2024-11-04 12:33:53.209762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.862 [2024-11-04 12:33:53.209772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.862 [2024-11-04 12:33:53.209778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.862 [2024-11-04 12:33:53.209783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.862 [2024-11-04 12:33:53.209793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.862 qpair failed and we were unable to recover it. 00:29:18.862 [2024-11-04 12:33:53.219792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.862 [2024-11-04 12:33:53.219844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.862 [2024-11-04 12:33:53.219853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.862 [2024-11-04 12:33:53.219859] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.862 [2024-11-04 12:33:53.219864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.862 [2024-11-04 12:33:53.219874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.862 qpair failed and we were unable to recover it. 00:29:18.862 [2024-11-04 12:33:53.229788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.862 [2024-11-04 12:33:53.229837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.862 [2024-11-04 12:33:53.229846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.862 [2024-11-04 12:33:53.229851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.863 [2024-11-04 12:33:53.229856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.863 [2024-11-04 12:33:53.229867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.863 qpair failed and we were unable to recover it. 
00:29:18.863 [2024-11-04 12:33:53.239844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.863 [2024-11-04 12:33:53.239891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.863 [2024-11-04 12:33:53.239903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.863 [2024-11-04 12:33:53.239909] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.863 [2024-11-04 12:33:53.239913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.863 [2024-11-04 12:33:53.239924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.863 qpair failed and we were unable to recover it. 00:29:18.863 [2024-11-04 12:33:53.249830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.863 [2024-11-04 12:33:53.249879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.863 [2024-11-04 12:33:53.249888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.863 [2024-11-04 12:33:53.249894] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.863 [2024-11-04 12:33:53.249898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.863 [2024-11-04 12:33:53.249909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.863 qpair failed and we were unable to recover it. 00:29:18.863 [2024-11-04 12:33:53.259889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.863 [2024-11-04 12:33:53.259940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.863 [2024-11-04 12:33:53.259950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.863 [2024-11-04 12:33:53.259955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.863 [2024-11-04 12:33:53.259960] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.863 [2024-11-04 12:33:53.259970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.863 qpair failed and we were unable to recover it. 
00:29:18.863 [2024-11-04 12:33:53.269933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.863 [2024-11-04 12:33:53.269997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.863 [2024-11-04 12:33:53.270007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.863 [2024-11-04 12:33:53.270012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.863 [2024-11-04 12:33:53.270017] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.863 [2024-11-04 12:33:53.270027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.863 qpair failed and we were unable to recover it. 00:29:18.863 [2024-11-04 12:33:53.279816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.863 [2024-11-04 12:33:53.279862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.863 [2024-11-04 12:33:53.279871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.863 [2024-11-04 12:33:53.279877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.863 [2024-11-04 12:33:53.279881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.863 [2024-11-04 12:33:53.279894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.863 qpair failed and we were unable to recover it. 00:29:18.863 [2024-11-04 12:33:53.289938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.863 [2024-11-04 12:33:53.290024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.863 [2024-11-04 12:33:53.290033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.863 [2024-11-04 12:33:53.290038] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.863 [2024-11-04 12:33:53.290043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.863 [2024-11-04 12:33:53.290053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.863 qpair failed and we were unable to recover it. 
00:29:18.863 [2024-11-04 12:33:53.299986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.863 [2024-11-04 12:33:53.300038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.863 [2024-11-04 12:33:53.300048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.863 [2024-11-04 12:33:53.300053] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.863 [2024-11-04 12:33:53.300058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.863 [2024-11-04 12:33:53.300068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.863 qpair failed and we were unable to recover it. 00:29:18.863 [2024-11-04 12:33:53.309914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.863 [2024-11-04 12:33:53.309966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.863 [2024-11-04 12:33:53.309976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.863 [2024-11-04 12:33:53.309981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.863 [2024-11-04 12:33:53.309986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.863 [2024-11-04 12:33:53.309996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.863 qpair failed and we were unable to recover it. 00:29:18.863 [2024-11-04 12:33:53.319944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.863 [2024-11-04 12:33:53.320043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.863 [2024-11-04 12:33:53.320053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.863 [2024-11-04 12:33:53.320058] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.863 [2024-11-04 12:33:53.320064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.863 [2024-11-04 12:33:53.320074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.863 qpair failed and we were unable to recover it. 
00:29:18.863 [2024-11-04 12:33:53.330035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.863 [2024-11-04 12:33:53.330119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.863 [2024-11-04 12:33:53.330134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.863 [2024-11-04 12:33:53.330141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.863 [2024-11-04 12:33:53.330146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.863 [2024-11-04 12:33:53.330156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.863 qpair failed and we were unable to recover it. 00:29:18.863 [2024-11-04 12:33:53.340131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.863 [2024-11-04 12:33:53.340186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.863 [2024-11-04 12:33:53.340195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.863 [2024-11-04 12:33:53.340201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.863 [2024-11-04 12:33:53.340206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.863 [2024-11-04 12:33:53.340216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.863 qpair failed and we were unable to recover it. 00:29:18.863 [2024-11-04 12:33:53.350137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.863 [2024-11-04 12:33:53.350185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.863 [2024-11-04 12:33:53.350195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.863 [2024-11-04 12:33:53.350200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.863 [2024-11-04 12:33:53.350205] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.863 [2024-11-04 12:33:53.350215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.863 qpair failed and we were unable to recover it. 
00:29:18.863 [2024-11-04 12:33:53.360205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.863 [2024-11-04 12:33:53.360254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.863 [2024-11-04 12:33:53.360264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.863 [2024-11-04 12:33:53.360269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.863 [2024-11-04 12:33:53.360274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.863 [2024-11-04 12:33:53.360284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.863 qpair failed and we were unable to recover it. 00:29:18.863 [2024-11-04 12:33:53.370147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.864 [2024-11-04 12:33:53.370193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.864 [2024-11-04 12:33:53.370203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.864 [2024-11-04 12:33:53.370208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.864 [2024-11-04 12:33:53.370216] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.864 [2024-11-04 12:33:53.370226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.864 qpair failed and we were unable to recover it. 00:29:18.864 [2024-11-04 12:33:53.380222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.864 [2024-11-04 12:33:53.380273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.864 [2024-11-04 12:33:53.380283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.864 [2024-11-04 12:33:53.380288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.864 [2024-11-04 12:33:53.380292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.864 [2024-11-04 12:33:53.380303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.864 qpair failed and we were unable to recover it. 
00:29:18.864 [2024-11-04 12:33:53.390243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.864 [2024-11-04 12:33:53.390291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.864 [2024-11-04 12:33:53.390301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.864 [2024-11-04 12:33:53.390306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.864 [2024-11-04 12:33:53.390311] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.864 [2024-11-04 12:33:53.390321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.864 qpair failed and we were unable to recover it. 00:29:18.864 [2024-11-04 12:33:53.400244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.864 [2024-11-04 12:33:53.400294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.864 [2024-11-04 12:33:53.400304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.864 [2024-11-04 12:33:53.400309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.864 [2024-11-04 12:33:53.400314] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.864 [2024-11-04 12:33:53.400324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.864 qpair failed and we were unable to recover it. 00:29:18.864 [2024-11-04 12:33:53.410227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.864 [2024-11-04 12:33:53.410277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.864 [2024-11-04 12:33:53.410287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.864 [2024-11-04 12:33:53.410293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.864 [2024-11-04 12:33:53.410297] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.864 [2024-11-04 12:33:53.410307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.864 qpair failed and we were unable to recover it. 
00:29:18.864 [2024-11-04 12:33:53.420334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.864 [2024-11-04 12:33:53.420390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.864 [2024-11-04 12:33:53.420400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.864 [2024-11-04 12:33:53.420405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.864 [2024-11-04 12:33:53.420410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:18.864 [2024-11-04 12:33:53.420420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.864 qpair failed and we were unable to recover it. 00:29:19.127 [2024-11-04 12:33:53.430341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.127 [2024-11-04 12:33:53.430389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.127 [2024-11-04 12:33:53.430399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.127 [2024-11-04 12:33:53.430404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.127 [2024-11-04 12:33:53.430409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.127 [2024-11-04 12:33:53.430420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.127 qpair failed and we were unable to recover it. 00:29:19.127 [2024-11-04 12:33:53.440336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.127 [2024-11-04 12:33:53.440382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.127 [2024-11-04 12:33:53.440392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.127 [2024-11-04 12:33:53.440397] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.127 [2024-11-04 12:33:53.440402] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.127 [2024-11-04 12:33:53.440412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.127 qpair failed and we were unable to recover it. 
00:29:19.127 [2024-11-04 12:33:53.450366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.127 [2024-11-04 12:33:53.450408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.127 [2024-11-04 12:33:53.450418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.127 [2024-11-04 12:33:53.450423] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.127 [2024-11-04 12:33:53.450428] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.127 [2024-11-04 12:33:53.450438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.127 qpair failed and we were unable to recover it. 00:29:19.127 [2024-11-04 12:33:53.460401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.127 [2024-11-04 12:33:53.460449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.127 [2024-11-04 12:33:53.460459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.127 [2024-11-04 12:33:53.460464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.127 [2024-11-04 12:33:53.460471] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.127 [2024-11-04 12:33:53.460482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.127 qpair failed and we were unable to recover it. 00:29:19.127 [2024-11-04 12:33:53.470355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.127 [2024-11-04 12:33:53.470424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.127 [2024-11-04 12:33:53.470434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.127 [2024-11-04 12:33:53.470439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.127 [2024-11-04 12:33:53.470444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.127 [2024-11-04 12:33:53.470454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.127 qpair failed and we were unable to recover it. 
00:29:19.127 [2024-11-04 12:33:53.480538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.127 [2024-11-04 12:33:53.480616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.127 [2024-11-04 12:33:53.480626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.127 [2024-11-04 12:33:53.480631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.127 [2024-11-04 12:33:53.480636] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.127 [2024-11-04 12:33:53.480646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.127 qpair failed and we were unable to recover it. 00:29:19.127 [2024-11-04 12:33:53.490362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.127 [2024-11-04 12:33:53.490404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.127 [2024-11-04 12:33:53.490414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.127 [2024-11-04 12:33:53.490420] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.127 [2024-11-04 12:33:53.490424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.127 [2024-11-04 12:33:53.490434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.127 qpair failed and we were unable to recover it. 00:29:19.127 [2024-11-04 12:33:53.500547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.127 [2024-11-04 12:33:53.500595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.127 [2024-11-04 12:33:53.500605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.127 [2024-11-04 12:33:53.500610] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.127 [2024-11-04 12:33:53.500615] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.127 [2024-11-04 12:33:53.500626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.127 qpair failed and we were unable to recover it. 
00:29:19.127 [2024-11-04 12:33:53.510599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.127 [2024-11-04 12:33:53.510652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.127 [2024-11-04 12:33:53.510662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.127 [2024-11-04 12:33:53.510667] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.127 [2024-11-04 12:33:53.510672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.127 [2024-11-04 12:33:53.510683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.127 qpair failed and we were unable to recover it. 00:29:19.127 [2024-11-04 12:33:53.520612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.127 [2024-11-04 12:33:53.520656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.127 [2024-11-04 12:33:53.520666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.127 [2024-11-04 12:33:53.520672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.127 [2024-11-04 12:33:53.520676] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.127 [2024-11-04 12:33:53.520687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.127 qpair failed and we were unable to recover it. 00:29:19.127 [2024-11-04 12:33:53.530593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.127 [2024-11-04 12:33:53.530640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.127 [2024-11-04 12:33:53.530650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.127 [2024-11-04 12:33:53.530655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.127 [2024-11-04 12:33:53.530660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.127 [2024-11-04 12:33:53.530671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.127 qpair failed and we were unable to recover it. 
00:29:19.127 [2024-11-04 12:33:53.540661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.127 [2024-11-04 12:33:53.540710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.128 [2024-11-04 12:33:53.540720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.128 [2024-11-04 12:33:53.540725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.128 [2024-11-04 12:33:53.540730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.128 [2024-11-04 12:33:53.540741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.128 qpair failed and we were unable to recover it. 00:29:19.128 [2024-11-04 12:33:53.550722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.128 [2024-11-04 12:33:53.550814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.128 [2024-11-04 12:33:53.550824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.128 [2024-11-04 12:33:53.550833] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.128 [2024-11-04 12:33:53.550838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.128 [2024-11-04 12:33:53.550849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.128 qpair failed and we were unable to recover it. 00:29:19.128 [2024-11-04 12:33:53.560717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.128 [2024-11-04 12:33:53.560766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.128 [2024-11-04 12:33:53.560776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.128 [2024-11-04 12:33:53.560782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.128 [2024-11-04 12:33:53.560786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.128 [2024-11-04 12:33:53.560797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.128 qpair failed and we were unable to recover it. 
00:29:19.128 [2024-11-04 12:33:53.570712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.128 [2024-11-04 12:33:53.570765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.128 [2024-11-04 12:33:53.570775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.128 [2024-11-04 12:33:53.570780] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.128 [2024-11-04 12:33:53.570785] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.128 [2024-11-04 12:33:53.570795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.128 qpair failed and we were unable to recover it. 00:29:19.128 [2024-11-04 12:33:53.580788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.128 [2024-11-04 12:33:53.580869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.128 [2024-11-04 12:33:53.580879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.128 [2024-11-04 12:33:53.580884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.128 [2024-11-04 12:33:53.580888] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.128 [2024-11-04 12:33:53.580899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.128 qpair failed and we were unable to recover it. 00:29:19.128 [2024-11-04 12:33:53.590807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.128 [2024-11-04 12:33:53.590862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.128 [2024-11-04 12:33:53.590872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.128 [2024-11-04 12:33:53.590877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.128 [2024-11-04 12:33:53.590882] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.128 [2024-11-04 12:33:53.590893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.128 qpair failed and we were unable to recover it. 
00:29:19.128 [2024-11-04 12:33:53.600837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.128 [2024-11-04 12:33:53.600883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.128 [2024-11-04 12:33:53.600893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.128 [2024-11-04 12:33:53.600898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.128 [2024-11-04 12:33:53.600903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.128 [2024-11-04 12:33:53.600913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.128 qpair failed and we were unable to recover it. 00:29:19.128 [2024-11-04 12:33:53.610714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.128 [2024-11-04 12:33:53.610772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.128 [2024-11-04 12:33:53.610782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.128 [2024-11-04 12:33:53.610787] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.128 [2024-11-04 12:33:53.610792] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.128 [2024-11-04 12:33:53.610802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.128 qpair failed and we were unable to recover it. 00:29:19.128 [2024-11-04 12:33:53.620889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.128 [2024-11-04 12:33:53.620940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.128 [2024-11-04 12:33:53.620950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.128 [2024-11-04 12:33:53.620955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.128 [2024-11-04 12:33:53.620960] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.128 [2024-11-04 12:33:53.620970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.128 qpair failed and we were unable to recover it. 
00:29:19.128 [2024-11-04 12:33:53.630910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.128 [2024-11-04 12:33:53.630957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.128 [2024-11-04 12:33:53.630967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.128 [2024-11-04 12:33:53.630972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.128 [2024-11-04 12:33:53.630976] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.128 [2024-11-04 12:33:53.630987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.128 qpair failed and we were unable to recover it. 00:29:19.128 [2024-11-04 12:33:53.640962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.128 [2024-11-04 12:33:53.641015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.128 [2024-11-04 12:33:53.641024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.128 [2024-11-04 12:33:53.641032] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.128 [2024-11-04 12:33:53.641037] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.128 [2024-11-04 12:33:53.641047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.128 qpair failed and we were unable to recover it. 00:29:19.128 [2024-11-04 12:33:53.650888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.128 [2024-11-04 12:33:53.650937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.128 [2024-11-04 12:33:53.650946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.128 [2024-11-04 12:33:53.650952] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.128 [2024-11-04 12:33:53.650957] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.128 [2024-11-04 12:33:53.650967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.128 qpair failed and we were unable to recover it. 
00:29:19.128 [2024-11-04 12:33:53.661016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.128 [2024-11-04 12:33:53.661090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.128 [2024-11-04 12:33:53.661100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.128 [2024-11-04 12:33:53.661105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.128 [2024-11-04 12:33:53.661110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.128 [2024-11-04 12:33:53.661120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.128 qpair failed and we were unable to recover it. 00:29:19.128 [2024-11-04 12:33:53.671068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.128 [2024-11-04 12:33:53.671118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.128 [2024-11-04 12:33:53.671128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.128 [2024-11-04 12:33:53.671133] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.128 [2024-11-04 12:33:53.671138] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.129 [2024-11-04 12:33:53.671148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.129 qpair failed and we were unable to recover it. 00:29:19.129 [2024-11-04 12:33:53.681044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.129 [2024-11-04 12:33:53.681142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.129 [2024-11-04 12:33:53.681151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.129 [2024-11-04 12:33:53.681157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.129 [2024-11-04 12:33:53.681161] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.129 [2024-11-04 12:33:53.681171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.129 qpair failed and we were unable to recover it. 
00:29:19.129 [2024-11-04 12:33:53.691031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.129 [2024-11-04 12:33:53.691069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.129 [2024-11-04 12:33:53.691079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.129 [2024-11-04 12:33:53.691084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.129 [2024-11-04 12:33:53.691090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.129 [2024-11-04 12:33:53.691099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.129 qpair failed and we were unable to recover it. 00:29:19.391 [2024-11-04 12:33:53.701124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.391 [2024-11-04 12:33:53.701214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.391 [2024-11-04 12:33:53.701224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.391 [2024-11-04 12:33:53.701229] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.391 [2024-11-04 12:33:53.701235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.391 [2024-11-04 12:33:53.701245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.391 qpair failed and we were unable to recover it. 00:29:19.392 [2024-11-04 12:33:53.711156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.392 [2024-11-04 12:33:53.711237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.392 [2024-11-04 12:33:53.711247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.392 [2024-11-04 12:33:53.711252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.392 [2024-11-04 12:33:53.711257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.392 [2024-11-04 12:33:53.711267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.392 qpair failed and we were unable to recover it. 
00:29:19.392 [2024-11-04 12:33:53.721174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.392 [2024-11-04 12:33:53.721269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.392 [2024-11-04 12:33:53.721279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.392 [2024-11-04 12:33:53.721284] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.392 [2024-11-04 12:33:53.721288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.392 [2024-11-04 12:33:53.721299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.392 qpair failed and we were unable to recover it.
00:29:19.392 [2024-11-04 12:33:53.731162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.392 [2024-11-04 12:33:53.731202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.392 [2024-11-04 12:33:53.731215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.392 [2024-11-04 12:33:53.731220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.392 [2024-11-04 12:33:53.731224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.392 [2024-11-04 12:33:53.731235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.392 qpair failed and we were unable to recover it.
00:29:19.392 [2024-11-04 12:33:53.741248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.392 [2024-11-04 12:33:53.741300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.392 [2024-11-04 12:33:53.741309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.392 [2024-11-04 12:33:53.741314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.392 [2024-11-04 12:33:53.741319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.392 [2024-11-04 12:33:53.741329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.392 qpair failed and we were unable to recover it.
00:29:19.392 [2024-11-04 12:33:53.751276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.392 [2024-11-04 12:33:53.751329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.392 [2024-11-04 12:33:53.751339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.392 [2024-11-04 12:33:53.751344] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.392 [2024-11-04 12:33:53.751349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.392 [2024-11-04 12:33:53.751359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.392 qpair failed and we were unable to recover it.
00:29:19.392 [2024-11-04 12:33:53.761264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.392 [2024-11-04 12:33:53.761309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.392 [2024-11-04 12:33:53.761319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.392 [2024-11-04 12:33:53.761324] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.392 [2024-11-04 12:33:53.761329] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.392 [2024-11-04 12:33:53.761339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.392 qpair failed and we were unable to recover it.
00:29:19.392 [2024-11-04 12:33:53.771297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.392 [2024-11-04 12:33:53.771341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.392 [2024-11-04 12:33:53.771350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.392 [2024-11-04 12:33:53.771356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.392 [2024-11-04 12:33:53.771361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.392 [2024-11-04 12:33:53.771374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.392 qpair failed and we were unable to recover it.
00:29:19.392 [2024-11-04 12:33:53.781359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.392 [2024-11-04 12:33:53.781435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.392 [2024-11-04 12:33:53.781444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.392 [2024-11-04 12:33:53.781450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.392 [2024-11-04 12:33:53.781454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.392 [2024-11-04 12:33:53.781465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.392 qpair failed and we were unable to recover it.
00:29:19.392 [2024-11-04 12:33:53.791373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.392 [2024-11-04 12:33:53.791442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.392 [2024-11-04 12:33:53.791452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.392 [2024-11-04 12:33:53.791457] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.392 [2024-11-04 12:33:53.791462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.392 [2024-11-04 12:33:53.791472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.392 qpair failed and we were unable to recover it.
00:29:19.392 [2024-11-04 12:33:53.801401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.392 [2024-11-04 12:33:53.801476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.392 [2024-11-04 12:33:53.801486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.392 [2024-11-04 12:33:53.801491] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.392 [2024-11-04 12:33:53.801496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.392 [2024-11-04 12:33:53.801507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.392 qpair failed and we were unable to recover it.
00:29:19.392 [2024-11-04 12:33:53.811392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.392 [2024-11-04 12:33:53.811487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.392 [2024-11-04 12:33:53.811506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.392 [2024-11-04 12:33:53.811512] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.392 [2024-11-04 12:33:53.811518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.392 [2024-11-04 12:33:53.811531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.392 qpair failed and we were unable to recover it.
00:29:19.392 [2024-11-04 12:33:53.821464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.392 [2024-11-04 12:33:53.821564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.392 [2024-11-04 12:33:53.821579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.392 [2024-11-04 12:33:53.821585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.392 [2024-11-04 12:33:53.821590] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.392 [2024-11-04 12:33:53.821601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.392 qpair failed and we were unable to recover it.
00:29:19.392 [2024-11-04 12:33:53.831473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.392 [2024-11-04 12:33:53.831531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.392 [2024-11-04 12:33:53.831550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.392 [2024-11-04 12:33:53.831556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.392 [2024-11-04 12:33:53.831562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.392 [2024-11-04 12:33:53.831575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.392 qpair failed and we were unable to recover it.
00:29:19.393 [2024-11-04 12:33:53.841465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.393 [2024-11-04 12:33:53.841512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.393 [2024-11-04 12:33:53.841530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.393 [2024-11-04 12:33:53.841537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.393 [2024-11-04 12:33:53.841542] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.393 [2024-11-04 12:33:53.841556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.393 qpair failed and we were unable to recover it.
00:29:19.393 [2024-11-04 12:33:53.851505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.393 [2024-11-04 12:33:53.851584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.393 [2024-11-04 12:33:53.851596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.393 [2024-11-04 12:33:53.851601] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.393 [2024-11-04 12:33:53.851606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.393 [2024-11-04 12:33:53.851617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.393 qpair failed and we were unable to recover it.
00:29:19.393 [2024-11-04 12:33:53.861569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.393 [2024-11-04 12:33:53.861622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.393 [2024-11-04 12:33:53.861632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.393 [2024-11-04 12:33:53.861638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.393 [2024-11-04 12:33:53.861642] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.393 [2024-11-04 12:33:53.861656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.393 qpair failed and we were unable to recover it.
00:29:19.393 [2024-11-04 12:33:53.871617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.393 [2024-11-04 12:33:53.871665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.393 [2024-11-04 12:33:53.871675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.393 [2024-11-04 12:33:53.871680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.393 [2024-11-04 12:33:53.871685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.393 [2024-11-04 12:33:53.871695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.393 qpair failed and we were unable to recover it.
00:29:19.393 [2024-11-04 12:33:53.881578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.393 [2024-11-04 12:33:53.881620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.393 [2024-11-04 12:33:53.881630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.393 [2024-11-04 12:33:53.881635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.393 [2024-11-04 12:33:53.881640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.393 [2024-11-04 12:33:53.881650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.393 qpair failed and we were unable to recover it.
00:29:19.393 [2024-11-04 12:33:53.891499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.393 [2024-11-04 12:33:53.891552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.393 [2024-11-04 12:33:53.891562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.393 [2024-11-04 12:33:53.891567] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.393 [2024-11-04 12:33:53.891572] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.393 [2024-11-04 12:33:53.891583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.393 qpair failed and we were unable to recover it.
00:29:19.393 [2024-11-04 12:33:53.901677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.393 [2024-11-04 12:33:53.901727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.393 [2024-11-04 12:33:53.901738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.393 [2024-11-04 12:33:53.901743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.393 [2024-11-04 12:33:53.901750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.393 [2024-11-04 12:33:53.901761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.393 qpair failed and we were unable to recover it.
00:29:19.393 [2024-11-04 12:33:53.911717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.393 [2024-11-04 12:33:53.911783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.393 [2024-11-04 12:33:53.911793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.393 [2024-11-04 12:33:53.911799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.393 [2024-11-04 12:33:53.911804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.393 [2024-11-04 12:33:53.911814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.393 qpair failed and we were unable to recover it.
00:29:19.393 [2024-11-04 12:33:53.921676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.393 [2024-11-04 12:33:53.921723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.393 [2024-11-04 12:33:53.921733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.393 [2024-11-04 12:33:53.921739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.393 [2024-11-04 12:33:53.921743] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.393 [2024-11-04 12:33:53.921756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.393 qpair failed and we were unable to recover it.
00:29:19.393 [2024-11-04 12:33:53.931716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.393 [2024-11-04 12:33:53.931764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.393 [2024-11-04 12:33:53.931774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.393 [2024-11-04 12:33:53.931780] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.393 [2024-11-04 12:33:53.931784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.393 [2024-11-04 12:33:53.931795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.393 qpair failed and we were unable to recover it.
00:29:19.393 [2024-11-04 12:33:53.941671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.393 [2024-11-04 12:33:53.941723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.393 [2024-11-04 12:33:53.941732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.393 [2024-11-04 12:33:53.941738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.393 [2024-11-04 12:33:53.941742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.393 [2024-11-04 12:33:53.941755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.393 qpair failed and we were unable to recover it.
00:29:19.393 [2024-11-04 12:33:53.951892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.393 [2024-11-04 12:33:53.951944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.393 [2024-11-04 12:33:53.951954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.393 [2024-11-04 12:33:53.951959] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.393 [2024-11-04 12:33:53.951968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.393 [2024-11-04 12:33:53.951979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.393 qpair failed and we were unable to recover it.
00:29:19.656 [2024-11-04 12:33:53.961814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.656 [2024-11-04 12:33:53.961858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.656 [2024-11-04 12:33:53.961868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.656 [2024-11-04 12:33:53.961873] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.656 [2024-11-04 12:33:53.961878] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.656 [2024-11-04 12:33:53.961888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.656 qpair failed and we were unable to recover it.
00:29:19.656 [2024-11-04 12:33:53.971839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.656 [2024-11-04 12:33:53.971890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.656 [2024-11-04 12:33:53.971899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.656 [2024-11-04 12:33:53.971905] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.656 [2024-11-04 12:33:53.971909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.656 [2024-11-04 12:33:53.971919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.656 qpair failed and we were unable to recover it.
00:29:19.656 [2024-11-04 12:33:53.981965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.656 [2024-11-04 12:33:53.982065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.656 [2024-11-04 12:33:53.982074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.656 [2024-11-04 12:33:53.982079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.656 [2024-11-04 12:33:53.982084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.657 [2024-11-04 12:33:53.982095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.657 qpair failed and we were unable to recover it.
00:29:19.657 [2024-11-04 12:33:53.991919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.657 [2024-11-04 12:33:53.991969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.657 [2024-11-04 12:33:53.991978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.657 [2024-11-04 12:33:53.991983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.657 [2024-11-04 12:33:53.991988] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.657 [2024-11-04 12:33:53.991998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.657 qpair failed and we were unable to recover it.
00:29:19.657 [2024-11-04 12:33:54.001809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.657 [2024-11-04 12:33:54.001858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.657 [2024-11-04 12:33:54.001868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.657 [2024-11-04 12:33:54.001873] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.657 [2024-11-04 12:33:54.001878] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.657 [2024-11-04 12:33:54.001888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.657 qpair failed and we were unable to recover it.
00:29:19.657 [2024-11-04 12:33:54.011952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.657 [2024-11-04 12:33:54.012046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.657 [2024-11-04 12:33:54.012056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.657 [2024-11-04 12:33:54.012062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.657 [2024-11-04 12:33:54.012066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.657 [2024-11-04 12:33:54.012076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.657 qpair failed and we were unable to recover it.
00:29:19.657 [2024-11-04 12:33:54.022020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.657 [2024-11-04 12:33:54.022081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.657 [2024-11-04 12:33:54.022091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.657 [2024-11-04 12:33:54.022096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.657 [2024-11-04 12:33:54.022101] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.657 [2024-11-04 12:33:54.022112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.657 qpair failed and we were unable to recover it.
00:29:19.657 [2024-11-04 12:33:54.032034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.657 [2024-11-04 12:33:54.032087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.657 [2024-11-04 12:33:54.032096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.657 [2024-11-04 12:33:54.032102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.657 [2024-11-04 12:33:54.032106] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.657 [2024-11-04 12:33:54.032116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.657 qpair failed and we were unable to recover it.
00:29:19.657 [2024-11-04 12:33:54.042050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.657 [2024-11-04 12:33:54.042096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.657 [2024-11-04 12:33:54.042105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.657 [2024-11-04 12:33:54.042113] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.657 [2024-11-04 12:33:54.042118] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.657 [2024-11-04 12:33:54.042128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.657 qpair failed and we were unable to recover it.
00:29:19.657 [2024-11-04 12:33:54.052117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.657 [2024-11-04 12:33:54.052173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.657 [2024-11-04 12:33:54.052183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.657 [2024-11-04 12:33:54.052188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.657 [2024-11-04 12:33:54.052192] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.657 [2024-11-04 12:33:54.052202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.657 qpair failed and we were unable to recover it.
00:29:19.657 [2024-11-04 12:33:54.062068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.657 [2024-11-04 12:33:54.062109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.657 [2024-11-04 12:33:54.062118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.657 [2024-11-04 12:33:54.062123] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.657 [2024-11-04 12:33:54.062128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.657 [2024-11-04 12:33:54.062139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.657 qpair failed and we were unable to recover it.
00:29:19.657 [2024-11-04 12:33:54.072166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.657 [2024-11-04 12:33:54.072215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.657 [2024-11-04 12:33:54.072224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.657 [2024-11-04 12:33:54.072230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.657 [2024-11-04 12:33:54.072235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.657 [2024-11-04 12:33:54.072245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.657 qpair failed and we were unable to recover it.
00:29:19.657 [2024-11-04 12:33:54.082109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.657 [2024-11-04 12:33:54.082150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.657 [2024-11-04 12:33:54.082160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.657 [2024-11-04 12:33:54.082165] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.657 [2024-11-04 12:33:54.082170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.657 [2024-11-04 12:33:54.082180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.657 qpair failed and we were unable to recover it.
00:29:19.657 [2024-11-04 12:33:54.092148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.657 [2024-11-04 12:33:54.092190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.657 [2024-11-04 12:33:54.092200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.657 [2024-11-04 12:33:54.092205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.657 [2024-11-04 12:33:54.092210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.657 [2024-11-04 12:33:54.092220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.657 qpair failed and we were unable to recover it.
00:29:19.657 [2024-11-04 12:33:54.102185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.657 [2024-11-04 12:33:54.102228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.657 [2024-11-04 12:33:54.102238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.657 [2024-11-04 12:33:54.102244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.657 [2024-11-04 12:33:54.102248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.657 [2024-11-04 12:33:54.102259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.658 qpair failed and we were unable to recover it.
00:29:19.658 [2024-11-04 12:33:54.112231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.658 [2024-11-04 12:33:54.112282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.658 [2024-11-04 12:33:54.112291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.658 [2024-11-04 12:33:54.112297] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.658 [2024-11-04 12:33:54.112301] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.658 [2024-11-04 12:33:54.112313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.658 qpair failed and we were unable to recover it.
00:29:19.658 [2024-11-04 12:33:54.122106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.658 [2024-11-04 12:33:54.122149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.658 [2024-11-04 12:33:54.122159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.658 [2024-11-04 12:33:54.122164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.658 [2024-11-04 12:33:54.122169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.658 [2024-11-04 12:33:54.122179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.658 qpair failed and we were unable to recover it.
00:29:19.658 [2024-11-04 12:33:54.132138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.658 [2024-11-04 12:33:54.132178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.658 [2024-11-04 12:33:54.132190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.658 [2024-11-04 12:33:54.132202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.658 [2024-11-04 12:33:54.132206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.658 [2024-11-04 12:33:54.132217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.658 qpair failed and we were unable to recover it.
00:29:19.658 [2024-11-04 12:33:54.142303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.658 [2024-11-04 12:33:54.142348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.658 [2024-11-04 12:33:54.142358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.658 [2024-11-04 12:33:54.142363] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.658 [2024-11-04 12:33:54.142368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.658 [2024-11-04 12:33:54.142379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.658 qpair failed and we were unable to recover it.
00:29:19.658 [2024-11-04 12:33:54.152373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.658 [2024-11-04 12:33:54.152471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.658 [2024-11-04 12:33:54.152482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.658 [2024-11-04 12:33:54.152487] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.658 [2024-11-04 12:33:54.152492] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.658 [2024-11-04 12:33:54.152502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.658 qpair failed and we were unable to recover it.
00:29:19.658 [2024-11-04 12:33:54.162345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.658 [2024-11-04 12:33:54.162387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.658 [2024-11-04 12:33:54.162397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.658 [2024-11-04 12:33:54.162402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.658 [2024-11-04 12:33:54.162407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.658 [2024-11-04 12:33:54.162417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.658 qpair failed and we were unable to recover it.
00:29:19.658 [2024-11-04 12:33:54.172370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.658 [2024-11-04 12:33:54.172453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.658 [2024-11-04 12:33:54.172464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.658 [2024-11-04 12:33:54.172469] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.658 [2024-11-04 12:33:54.172475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.658 [2024-11-04 12:33:54.172485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.658 qpair failed and we were unable to recover it.
00:29:19.658 [2024-11-04 12:33:54.182411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.658 [2024-11-04 12:33:54.182453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.658 [2024-11-04 12:33:54.182463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.658 [2024-11-04 12:33:54.182468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.658 [2024-11-04 12:33:54.182473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.658 [2024-11-04 12:33:54.182483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.658 qpair failed and we were unable to recover it.
00:29:19.658 [2024-11-04 12:33:54.192488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.658 [2024-11-04 12:33:54.192539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.658 [2024-11-04 12:33:54.192549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.658 [2024-11-04 12:33:54.192554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.658 [2024-11-04 12:33:54.192558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.658 [2024-11-04 12:33:54.192569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.658 qpair failed and we were unable to recover it.
00:29:19.658 [2024-11-04 12:33:54.202453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.658 [2024-11-04 12:33:54.202494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.658 [2024-11-04 12:33:54.202504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.658 [2024-11-04 12:33:54.202509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.658 [2024-11-04 12:33:54.202514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.658 [2024-11-04 12:33:54.202524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.658 qpair failed and we were unable to recover it.
00:29:19.658 [2024-11-04 12:33:54.212497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.658 [2024-11-04 12:33:54.212551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.658 [2024-11-04 12:33:54.212561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.658 [2024-11-04 12:33:54.212567] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.658 [2024-11-04 12:33:54.212571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.658 [2024-11-04 12:33:54.212582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.658 qpair failed and we were unable to recover it.
00:29:19.658 [2024-11-04 12:33:54.222538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.658 [2024-11-04 12:33:54.222587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.658 [2024-11-04 12:33:54.222599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.658 [2024-11-04 12:33:54.222604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.658 [2024-11-04 12:33:54.222609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.658 [2024-11-04 12:33:54.222619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.658 qpair failed and we were unable to recover it.
00:29:19.922 [2024-11-04 12:33:54.232600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.922 [2024-11-04 12:33:54.232678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.922 [2024-11-04 12:33:54.232688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.922 [2024-11-04 12:33:54.232693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.922 [2024-11-04 12:33:54.232698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.922 [2024-11-04 12:33:54.232708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.922 qpair failed and we were unable to recover it.
00:29:19.922 [2024-11-04 12:33:54.242577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.922 [2024-11-04 12:33:54.242626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.922 [2024-11-04 12:33:54.242635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.922 [2024-11-04 12:33:54.242640] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.922 [2024-11-04 12:33:54.242645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.922 [2024-11-04 12:33:54.242656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.922 qpair failed and we were unable to recover it.
00:29:19.922 [2024-11-04 12:33:54.252617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.922 [2024-11-04 12:33:54.252668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.922 [2024-11-04 12:33:54.252677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.922 [2024-11-04 12:33:54.252683] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.922 [2024-11-04 12:33:54.252687] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.922 [2024-11-04 12:33:54.252697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.922 qpair failed and we were unable to recover it.
00:29:19.922 [2024-11-04 12:33:54.262672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.922 [2024-11-04 12:33:54.262789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.922 [2024-11-04 12:33:54.262800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.922 [2024-11-04 12:33:54.262805] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.922 [2024-11-04 12:33:54.262810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.922 [2024-11-04 12:33:54.262824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.922 qpair failed and we were unable to recover it.
00:29:19.922 [2024-11-04 12:33:54.272701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.922 [2024-11-04 12:33:54.272763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.922 [2024-11-04 12:33:54.272773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.922 [2024-11-04 12:33:54.272779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.922 [2024-11-04 12:33:54.272783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.922 [2024-11-04 12:33:54.272794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.922 qpair failed and we were unable to recover it.
00:29:19.922 [2024-11-04 12:33:54.282563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:19.922 [2024-11-04 12:33:54.282604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:19.922 [2024-11-04 12:33:54.282614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:19.922 [2024-11-04 12:33:54.282620] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:19.922 [2024-11-04 12:33:54.282624] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:19.922 [2024-11-04 12:33:54.282635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:19.922 qpair failed and we were unable to recover it.
00:29:19.922 [2024-11-04 12:33:54.292705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.922 [2024-11-04 12:33:54.292751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.922 [2024-11-04 12:33:54.292762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.922 [2024-11-04 12:33:54.292767] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.922 [2024-11-04 12:33:54.292772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.922 [2024-11-04 12:33:54.292782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.922 qpair failed and we were unable to recover it. 00:29:19.922 [2024-11-04 12:33:54.302720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.922 [2024-11-04 12:33:54.302763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.922 [2024-11-04 12:33:54.302774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.923 [2024-11-04 12:33:54.302779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.923 [2024-11-04 12:33:54.302784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.923 [2024-11-04 12:33:54.302794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-11-04 12:33:54.312833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.923 [2024-11-04 12:33:54.312879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.923 [2024-11-04 12:33:54.312892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.923 [2024-11-04 12:33:54.312897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.923 [2024-11-04 12:33:54.312902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.923 [2024-11-04 12:33:54.312912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.923 qpair failed and we were unable to recover it. 
00:29:19.923 [2024-11-04 12:33:54.322788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.923 [2024-11-04 12:33:54.322837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.923 [2024-11-04 12:33:54.322846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.923 [2024-11-04 12:33:54.322851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.923 [2024-11-04 12:33:54.322856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.923 [2024-11-04 12:33:54.322866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-11-04 12:33:54.332788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.923 [2024-11-04 12:33:54.332827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.923 [2024-11-04 12:33:54.332837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.923 [2024-11-04 12:33:54.332842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.923 [2024-11-04 12:33:54.332846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.923 [2024-11-04 12:33:54.332857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-11-04 12:33:54.342843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.923 [2024-11-04 12:33:54.342886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.923 [2024-11-04 12:33:54.342896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.923 [2024-11-04 12:33:54.342902] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.923 [2024-11-04 12:33:54.342906] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.923 [2024-11-04 12:33:54.342916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.923 qpair failed and we were unable to recover it. 
00:29:19.923 [2024-11-04 12:33:54.352958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.923 [2024-11-04 12:33:54.353037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.923 [2024-11-04 12:33:54.353046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.923 [2024-11-04 12:33:54.353052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.923 [2024-11-04 12:33:54.353056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.923 [2024-11-04 12:33:54.353069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-11-04 12:33:54.362967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.923 [2024-11-04 12:33:54.363031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.923 [2024-11-04 12:33:54.363041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.923 [2024-11-04 12:33:54.363046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.923 [2024-11-04 12:33:54.363051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.923 [2024-11-04 12:33:54.363062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-11-04 12:33:54.372992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.923 [2024-11-04 12:33:54.373059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.923 [2024-11-04 12:33:54.373069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.923 [2024-11-04 12:33:54.373074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.923 [2024-11-04 12:33:54.373078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.923 [2024-11-04 12:33:54.373089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.923 qpair failed and we were unable to recover it. 
00:29:19.923 [2024-11-04 12:33:54.382981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.923 [2024-11-04 12:33:54.383027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.923 [2024-11-04 12:33:54.383037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.923 [2024-11-04 12:33:54.383042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.923 [2024-11-04 12:33:54.383046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.923 [2024-11-04 12:33:54.383057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-11-04 12:33:54.393070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.923 [2024-11-04 12:33:54.393111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.923 [2024-11-04 12:33:54.393120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.923 [2024-11-04 12:33:54.393126] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.923 [2024-11-04 12:33:54.393130] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.923 [2024-11-04 12:33:54.393141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-11-04 12:33:54.403016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.923 [2024-11-04 12:33:54.403056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.923 [2024-11-04 12:33:54.403069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.923 [2024-11-04 12:33:54.403074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.923 [2024-11-04 12:33:54.403078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.923 [2024-11-04 12:33:54.403089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.923 qpair failed and we were unable to recover it. 
00:29:19.923 [2024-11-04 12:33:54.413066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.923 [2024-11-04 12:33:54.413107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.923 [2024-11-04 12:33:54.413117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.923 [2024-11-04 12:33:54.413122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.923 [2024-11-04 12:33:54.413126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.923 [2024-11-04 12:33:54.413137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.923 [2024-11-04 12:33:54.423095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.923 [2024-11-04 12:33:54.423139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.923 [2024-11-04 12:33:54.423149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.923 [2024-11-04 12:33:54.423154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.923 [2024-11-04 12:33:54.423158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.923 [2024-11-04 12:33:54.423169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.923 qpair failed and we were unable to recover it. 00:29:19.924 [2024-11-04 12:33:54.433147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.924 [2024-11-04 12:33:54.433242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.924 [2024-11-04 12:33:54.433252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.924 [2024-11-04 12:33:54.433258] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.924 [2024-11-04 12:33:54.433262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.924 [2024-11-04 12:33:54.433272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.924 qpair failed and we were unable to recover it. 
00:29:19.924 [2024-11-04 12:33:54.443122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.924 [2024-11-04 12:33:54.443163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.924 [2024-11-04 12:33:54.443173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.924 [2024-11-04 12:33:54.443178] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.924 [2024-11-04 12:33:54.443185] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.924 [2024-11-04 12:33:54.443195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-11-04 12:33:54.453155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.924 [2024-11-04 12:33:54.453193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.924 [2024-11-04 12:33:54.453203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.924 [2024-11-04 12:33:54.453208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.924 [2024-11-04 12:33:54.453213] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.924 [2024-11-04 12:33:54.453223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-11-04 12:33:54.463180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.924 [2024-11-04 12:33:54.463224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.924 [2024-11-04 12:33:54.463235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.924 [2024-11-04 12:33:54.463240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.924 [2024-11-04 12:33:54.463244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.924 [2024-11-04 12:33:54.463254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.924 qpair failed and we were unable to recover it. 
00:29:19.924 [2024-11-04 12:33:54.473264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.924 [2024-11-04 12:33:54.473312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.924 [2024-11-04 12:33:54.473321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.924 [2024-11-04 12:33:54.473327] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.924 [2024-11-04 12:33:54.473331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.924 [2024-11-04 12:33:54.473341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.924 qpair failed and we were unable to recover it. 00:29:19.924 [2024-11-04 12:33:54.483239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.924 [2024-11-04 12:33:54.483281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.924 [2024-11-04 12:33:54.483291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.924 [2024-11-04 12:33:54.483296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.924 [2024-11-04 12:33:54.483301] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:19.924 [2024-11-04 12:33:54.483311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.924 qpair failed and we were unable to recover it. 00:29:20.187 [2024-11-04 12:33:54.493253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.187 [2024-11-04 12:33:54.493295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.187 [2024-11-04 12:33:54.493305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.187 [2024-11-04 12:33:54.493311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.187 [2024-11-04 12:33:54.493315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.187 [2024-11-04 12:33:54.493325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.187 qpair failed and we were unable to recover it. 
00:29:20.187 [2024-11-04 12:33:54.503303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.187 [2024-11-04 12:33:54.503390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.187 [2024-11-04 12:33:54.503399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.187 [2024-11-04 12:33:54.503405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.187 [2024-11-04 12:33:54.503409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.187 [2024-11-04 12:33:54.503419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.187 qpair failed and we were unable to recover it. 00:29:20.187 [2024-11-04 12:33:54.513399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.187 [2024-11-04 12:33:54.513444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.187 [2024-11-04 12:33:54.513454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.187 [2024-11-04 12:33:54.513459] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.187 [2024-11-04 12:33:54.513464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.187 [2024-11-04 12:33:54.513474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.187 qpair failed and we were unable to recover it. 00:29:20.187 [2024-11-04 12:33:54.523319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.187 [2024-11-04 12:33:54.523365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.187 [2024-11-04 12:33:54.523375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.187 [2024-11-04 12:33:54.523380] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.187 [2024-11-04 12:33:54.523385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.187 [2024-11-04 12:33:54.523395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.187 qpair failed and we were unable to recover it. 
00:29:20.187 [2024-11-04 12:33:54.533357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.187 [2024-11-04 12:33:54.533396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.187 [2024-11-04 12:33:54.533406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.187 [2024-11-04 12:33:54.533412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.188 [2024-11-04 12:33:54.533419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.188 [2024-11-04 12:33:54.533429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.188 qpair failed and we were unable to recover it. 00:29:20.188 [2024-11-04 12:33:54.543386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.188 [2024-11-04 12:33:54.543429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.188 [2024-11-04 12:33:54.543439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.188 [2024-11-04 12:33:54.543444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.188 [2024-11-04 12:33:54.543449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.188 [2024-11-04 12:33:54.543459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.188 qpair failed and we were unable to recover it. 00:29:20.188 [2024-11-04 12:33:54.553494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.188 [2024-11-04 12:33:54.553543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.188 [2024-11-04 12:33:54.553562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.188 [2024-11-04 12:33:54.553568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.188 [2024-11-04 12:33:54.553573] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.188 [2024-11-04 12:33:54.553588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.188 qpair failed and we were unable to recover it. 
00:29:20.188 [2024-11-04 12:33:54.563465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.188 [2024-11-04 12:33:54.563514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.188 [2024-11-04 12:33:54.563532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.188 [2024-11-04 12:33:54.563539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.188 [2024-11-04 12:33:54.563544] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.188 [2024-11-04 12:33:54.563558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.188 qpair failed and we were unable to recover it. 00:29:20.188 [2024-11-04 12:33:54.573490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.188 [2024-11-04 12:33:54.573535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.188 [2024-11-04 12:33:54.573555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.188 [2024-11-04 12:33:54.573561] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.188 [2024-11-04 12:33:54.573566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.188 [2024-11-04 12:33:54.573580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.188 qpair failed and we were unable to recover it. 00:29:20.188 [2024-11-04 12:33:54.583485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.188 [2024-11-04 12:33:54.583530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.188 [2024-11-04 12:33:54.583541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.188 [2024-11-04 12:33:54.583547] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.188 [2024-11-04 12:33:54.583552] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.188 [2024-11-04 12:33:54.583563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.188 qpair failed and we were unable to recover it. 
00:29:20.188 [2024-11-04 12:33:54.593598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.188 [2024-11-04 12:33:54.593670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.188 [2024-11-04 12:33:54.593680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.188 [2024-11-04 12:33:54.593685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.188 [2024-11-04 12:33:54.593690] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.188 [2024-11-04 12:33:54.593701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.188 qpair failed and we were unable to recover it. 00:29:20.188 [2024-11-04 12:33:54.603575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.188 [2024-11-04 12:33:54.603614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.188 [2024-11-04 12:33:54.603625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.188 [2024-11-04 12:33:54.603630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.188 [2024-11-04 12:33:54.603634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.188 [2024-11-04 12:33:54.603645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.188 qpair failed and we were unable to recover it. 00:29:20.188 [2024-11-04 12:33:54.613594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.188 [2024-11-04 12:33:54.613662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.188 [2024-11-04 12:33:54.613672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.188 [2024-11-04 12:33:54.613678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.188 [2024-11-04 12:33:54.613682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.188 [2024-11-04 12:33:54.613693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.188 qpair failed and we were unable to recover it. 
00:29:20.188 [2024-11-04 12:33:54.623634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.188 [2024-11-04 12:33:54.623694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.188 [2024-11-04 12:33:54.623704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.188 [2024-11-04 12:33:54.623712] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.188 [2024-11-04 12:33:54.623717] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.188 [2024-11-04 12:33:54.623728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.188 qpair failed and we were unable to recover it. 00:29:20.188 [2024-11-04 12:33:54.633678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.188 [2024-11-04 12:33:54.633726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.188 [2024-11-04 12:33:54.633736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.188 [2024-11-04 12:33:54.633741] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.188 [2024-11-04 12:33:54.633750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.188 [2024-11-04 12:33:54.633761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.188 qpair failed and we were unable to recover it. 00:29:20.188 [2024-11-04 12:33:54.643674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.188 [2024-11-04 12:33:54.643712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.188 [2024-11-04 12:33:54.643722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.188 [2024-11-04 12:33:54.643728] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.188 [2024-11-04 12:33:54.643733] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.188 [2024-11-04 12:33:54.643743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.188 qpair failed and we were unable to recover it. 
00:29:20.188 [2024-11-04 12:33:54.653712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.188 [2024-11-04 12:33:54.653758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.188 [2024-11-04 12:33:54.653768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.188 [2024-11-04 12:33:54.653773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.188 [2024-11-04 12:33:54.653778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.188 [2024-11-04 12:33:54.653789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.188 qpair failed and we were unable to recover it. 00:29:20.188 [2024-11-04 12:33:54.663628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.188 [2024-11-04 12:33:54.663675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.188 [2024-11-04 12:33:54.663684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.188 [2024-11-04 12:33:54.663690] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.188 [2024-11-04 12:33:54.663694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.189 [2024-11-04 12:33:54.663705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.189 qpair failed and we were unable to recover it. 00:29:20.189 [2024-11-04 12:33:54.673807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.189 [2024-11-04 12:33:54.673856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.189 [2024-11-04 12:33:54.673866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.189 [2024-11-04 12:33:54.673871] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.189 [2024-11-04 12:33:54.673875] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.189 [2024-11-04 12:33:54.673886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.189 qpair failed and we were unable to recover it. 
00:29:20.189 [2024-11-04 12:33:54.683761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.189 [2024-11-04 12:33:54.683886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.189 [2024-11-04 12:33:54.683896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.189 [2024-11-04 12:33:54.683901] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.189 [2024-11-04 12:33:54.683906] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.189 [2024-11-04 12:33:54.683916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.189 qpair failed and we were unable to recover it. 00:29:20.189 [2024-11-04 12:33:54.693816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.189 [2024-11-04 12:33:54.693859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.189 [2024-11-04 12:33:54.693869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.189 [2024-11-04 12:33:54.693874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.189 [2024-11-04 12:33:54.693879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.189 [2024-11-04 12:33:54.693889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.189 qpair failed and we were unable to recover it. 00:29:20.189 [2024-11-04 12:33:54.703864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.189 [2024-11-04 12:33:54.703907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.189 [2024-11-04 12:33:54.703917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.189 [2024-11-04 12:33:54.703922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.189 [2024-11-04 12:33:54.703927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.189 [2024-11-04 12:33:54.703938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.189 qpair failed and we were unable to recover it. 
00:29:20.189 [2024-11-04 12:33:54.713915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.189 [2024-11-04 12:33:54.713967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.189 [2024-11-04 12:33:54.713979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.189 [2024-11-04 12:33:54.713984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.189 [2024-11-04 12:33:54.713989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.189 [2024-11-04 12:33:54.713999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.189 qpair failed and we were unable to recover it. 00:29:20.189 [2024-11-04 12:33:54.723897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.189 [2024-11-04 12:33:54.723938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.189 [2024-11-04 12:33:54.723949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.189 [2024-11-04 12:33:54.723955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.189 [2024-11-04 12:33:54.723962] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.189 [2024-11-04 12:33:54.723973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.189 qpair failed and we were unable to recover it. 00:29:20.189 [2024-11-04 12:33:54.733946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.189 [2024-11-04 12:33:54.733990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.189 [2024-11-04 12:33:54.734000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.189 [2024-11-04 12:33:54.734005] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.189 [2024-11-04 12:33:54.734010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.189 [2024-11-04 12:33:54.734020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.189 qpair failed and we were unable to recover it. 
00:29:20.189 [2024-11-04 12:33:54.743993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.189 [2024-11-04 12:33:54.744037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.189 [2024-11-04 12:33:54.744047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.189 [2024-11-04 12:33:54.744052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.189 [2024-11-04 12:33:54.744057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.189 [2024-11-04 12:33:54.744067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.189 qpair failed and we were unable to recover it. 00:29:20.189 [2024-11-04 12:33:54.754027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.189 [2024-11-04 12:33:54.754103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.189 [2024-11-04 12:33:54.754113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.189 [2024-11-04 12:33:54.754119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.189 [2024-11-04 12:33:54.754123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.189 [2024-11-04 12:33:54.754134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.189 qpair failed and we were unable to recover it. 00:29:20.453 [2024-11-04 12:33:54.763894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.453 [2024-11-04 12:33:54.763935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.453 [2024-11-04 12:33:54.763945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.453 [2024-11-04 12:33:54.763950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.453 [2024-11-04 12:33:54.763955] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.453 [2024-11-04 12:33:54.763965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.453 qpair failed and we were unable to recover it. 
00:29:20.453 [2024-11-04 12:33:54.774031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.453 [2024-11-04 12:33:54.774070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.453 [2024-11-04 12:33:54.774080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.453 [2024-11-04 12:33:54.774086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.453 [2024-11-04 12:33:54.774090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.453 [2024-11-04 12:33:54.774101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.453 qpair failed and we were unable to recover it. 00:29:20.453 [2024-11-04 12:33:54.784087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.453 [2024-11-04 12:33:54.784181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.453 [2024-11-04 12:33:54.784191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.453 [2024-11-04 12:33:54.784197] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.453 [2024-11-04 12:33:54.784201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.453 [2024-11-04 12:33:54.784212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.453 qpair failed and we were unable to recover it. 00:29:20.453 [2024-11-04 12:33:54.794140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.453 [2024-11-04 12:33:54.794192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.453 [2024-11-04 12:33:54.794202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.453 [2024-11-04 12:33:54.794207] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.453 [2024-11-04 12:33:54.794212] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.453 [2024-11-04 12:33:54.794222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.453 qpair failed and we were unable to recover it. 
00:29:20.453 [2024-11-04 12:33:54.804025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.453 [2024-11-04 12:33:54.804063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.453 [2024-11-04 12:33:54.804077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.453 [2024-11-04 12:33:54.804083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.453 [2024-11-04 12:33:54.804088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.453 [2024-11-04 12:33:54.804098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.453 qpair failed and we were unable to recover it. 00:29:20.453 [2024-11-04 12:33:54.814141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.453 [2024-11-04 12:33:54.814219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.453 [2024-11-04 12:33:54.814229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.453 [2024-11-04 12:33:54.814234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.453 [2024-11-04 12:33:54.814239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.453 [2024-11-04 12:33:54.814250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.453 qpair failed and we were unable to recover it. 00:29:20.453 [2024-11-04 12:33:54.824189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.453 [2024-11-04 12:33:54.824232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.453 [2024-11-04 12:33:54.824242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.453 [2024-11-04 12:33:54.824247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.453 [2024-11-04 12:33:54.824252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.453 [2024-11-04 12:33:54.824262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.453 qpair failed and we were unable to recover it. 
00:29:20.454 [2024-11-04 12:33:54.834272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.454 [2024-11-04 12:33:54.834317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.454 [2024-11-04 12:33:54.834327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.454 [2024-11-04 12:33:54.834332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.454 [2024-11-04 12:33:54.834337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.454 [2024-11-04 12:33:54.834347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.454 qpair failed and we were unable to recover it. 00:29:20.454 [2024-11-04 12:33:54.844284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.454 [2024-11-04 12:33:54.844359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.454 [2024-11-04 12:33:54.844369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.454 [2024-11-04 12:33:54.844374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.454 [2024-11-04 12:33:54.844379] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.454 [2024-11-04 12:33:54.844396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.454 qpair failed and we were unable to recover it. 00:29:20.454 [2024-11-04 12:33:54.854267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.454 [2024-11-04 12:33:54.854317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.454 [2024-11-04 12:33:54.854326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.454 [2024-11-04 12:33:54.854332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.454 [2024-11-04 12:33:54.854336] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90 00:29:20.454 [2024-11-04 12:33:54.854347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.454 qpair failed and we were unable to recover it. 
00:29:20.454 [2024-11-04 12:33:54.864285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.454 [2024-11-04 12:33:54.864340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.454 [2024-11-04 12:33:54.864350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.454 [2024-11-04 12:33:54.864355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.454 [2024-11-04 12:33:54.864360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.454 [2024-11-04 12:33:54.864371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.454 qpair failed and we were unable to recover it.
00:29:20.454 [2024-11-04 12:33:54.874371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.454 [2024-11-04 12:33:54.874418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.454 [2024-11-04 12:33:54.874427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.454 [2024-11-04 12:33:54.874432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.454 [2024-11-04 12:33:54.874437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.454 [2024-11-04 12:33:54.874447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.454 qpair failed and we were unable to recover it.
00:29:20.454 [2024-11-04 12:33:54.884353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.454 [2024-11-04 12:33:54.884420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.454 [2024-11-04 12:33:54.884430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.454 [2024-11-04 12:33:54.884435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.454 [2024-11-04 12:33:54.884439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.454 [2024-11-04 12:33:54.884449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.454 qpair failed and we were unable to recover it.
00:29:20.454 [2024-11-04 12:33:54.894378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.454 [2024-11-04 12:33:54.894465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.454 [2024-11-04 12:33:54.894481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.454 [2024-11-04 12:33:54.894486] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.454 [2024-11-04 12:33:54.894491] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.454 [2024-11-04 12:33:54.894503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.454 qpair failed and we were unable to recover it.
00:29:20.454 [2024-11-04 12:33:54.904368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.454 [2024-11-04 12:33:54.904412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.454 [2024-11-04 12:33:54.904422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.454 [2024-11-04 12:33:54.904427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.454 [2024-11-04 12:33:54.904432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.454 [2024-11-04 12:33:54.904442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.454 qpair failed and we were unable to recover it.
00:29:20.454 [2024-11-04 12:33:54.914390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.454 [2024-11-04 12:33:54.914438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.454 [2024-11-04 12:33:54.914448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.454 [2024-11-04 12:33:54.914454] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.454 [2024-11-04 12:33:54.914458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.454 [2024-11-04 12:33:54.914468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.454 qpair failed and we were unable to recover it.
00:29:20.454 [2024-11-04 12:33:54.924445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.454 [2024-11-04 12:33:54.924490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.454 [2024-11-04 12:33:54.924509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.454 [2024-11-04 12:33:54.924516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.454 [2024-11-04 12:33:54.924521] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.454 [2024-11-04 12:33:54.924535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.454 qpair failed and we were unable to recover it.
00:29:20.454 [2024-11-04 12:33:54.934544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.454 [2024-11-04 12:33:54.934589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.454 [2024-11-04 12:33:54.934601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.454 [2024-11-04 12:33:54.934606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.454 [2024-11-04 12:33:54.934615] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.454 [2024-11-04 12:33:54.934626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.454 qpair failed and we were unable to recover it.
00:29:20.454 [2024-11-04 12:33:54.944529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.454 [2024-11-04 12:33:54.944574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.454 [2024-11-04 12:33:54.944584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.454 [2024-11-04 12:33:54.944590] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.454 [2024-11-04 12:33:54.944595] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.454 [2024-11-04 12:33:54.944606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.454 qpair failed and we were unable to recover it.
00:29:20.454 [2024-11-04 12:33:54.954560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.454 [2024-11-04 12:33:54.954612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.454 [2024-11-04 12:33:54.954622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.454 [2024-11-04 12:33:54.954627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.454 [2024-11-04 12:33:54.954632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.454 [2024-11-04 12:33:54.954643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.454 qpair failed and we were unable to recover it.
00:29:20.454 [2024-11-04 12:33:54.964549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.454 [2024-11-04 12:33:54.964597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.454 [2024-11-04 12:33:54.964607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.455 [2024-11-04 12:33:54.964612] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.455 [2024-11-04 12:33:54.964616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.455 [2024-11-04 12:33:54.964627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.455 qpair failed and we were unable to recover it.
00:29:20.455 [2024-11-04 12:33:54.974475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.455 [2024-11-04 12:33:54.974523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.455 [2024-11-04 12:33:54.974533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.455 [2024-11-04 12:33:54.974538] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.455 [2024-11-04 12:33:54.974543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.455 [2024-11-04 12:33:54.974553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.455 qpair failed and we were unable to recover it.
00:29:20.455 [2024-11-04 12:33:54.984618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.455 [2024-11-04 12:33:54.984699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.455 [2024-11-04 12:33:54.984709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.455 [2024-11-04 12:33:54.984714] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.455 [2024-11-04 12:33:54.984719] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.455 [2024-11-04 12:33:54.984731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.455 qpair failed and we were unable to recover it.
00:29:20.455 [2024-11-04 12:33:54.994689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.455 [2024-11-04 12:33:54.994782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.455 [2024-11-04 12:33:54.994792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.455 [2024-11-04 12:33:54.994798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.455 [2024-11-04 12:33:54.994803] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.455 [2024-11-04 12:33:54.994814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.455 qpair failed and we were unable to recover it.
00:29:20.455 [2024-11-04 12:33:55.004662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.455 [2024-11-04 12:33:55.004706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.455 [2024-11-04 12:33:55.004716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.455 [2024-11-04 12:33:55.004721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.455 [2024-11-04 12:33:55.004726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.455 [2024-11-04 12:33:55.004736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.455 qpair failed and we were unable to recover it.
00:29:20.455 [2024-11-04 12:33:55.014706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.455 [2024-11-04 12:33:55.014753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.455 [2024-11-04 12:33:55.014763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.455 [2024-11-04 12:33:55.014768] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.455 [2024-11-04 12:33:55.014773] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.455 [2024-11-04 12:33:55.014783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.455 qpair failed and we were unable to recover it.
00:29:20.717 [2024-11-04 12:33:55.024733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.717 [2024-11-04 12:33:55.024810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.717 [2024-11-04 12:33:55.024820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.717 [2024-11-04 12:33:55.024826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.717 [2024-11-04 12:33:55.024833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.717 [2024-11-04 12:33:55.024844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.717 qpair failed and we were unable to recover it.
00:29:20.717 [2024-11-04 12:33:55.034811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.717 [2024-11-04 12:33:55.034861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.717 [2024-11-04 12:33:55.034870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.717 [2024-11-04 12:33:55.034875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.717 [2024-11-04 12:33:55.034880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.717 [2024-11-04 12:33:55.034890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.717 qpair failed and we were unable to recover it.
00:29:20.717 [2024-11-04 12:33:55.044788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.717 [2024-11-04 12:33:55.044827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.717 [2024-11-04 12:33:55.044836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.717 [2024-11-04 12:33:55.044841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.717 [2024-11-04 12:33:55.044846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.717 [2024-11-04 12:33:55.044856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.717 qpair failed and we were unable to recover it.
00:29:20.717 [2024-11-04 12:33:55.054772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.717 [2024-11-04 12:33:55.054815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.717 [2024-11-04 12:33:55.054825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.717 [2024-11-04 12:33:55.054830] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.717 [2024-11-04 12:33:55.054835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.717 [2024-11-04 12:33:55.054845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.717 qpair failed and we were unable to recover it.
00:29:20.717 [2024-11-04 12:33:55.064842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.717 [2024-11-04 12:33:55.064882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.717 [2024-11-04 12:33:55.064891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.717 [2024-11-04 12:33:55.064897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.717 [2024-11-04 12:33:55.064901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.717 [2024-11-04 12:33:55.064912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.717 qpair failed and we were unable to recover it.
00:29:20.717 [2024-11-04 12:33:55.074908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.717 [2024-11-04 12:33:55.074957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.717 [2024-11-04 12:33:55.074967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.717 [2024-11-04 12:33:55.074972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.717 [2024-11-04 12:33:55.074977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.717 [2024-11-04 12:33:55.074987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.717 qpair failed and we were unable to recover it.
00:29:20.717 [2024-11-04 12:33:55.084871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.717 [2024-11-04 12:33:55.084912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.717 [2024-11-04 12:33:55.084922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.717 [2024-11-04 12:33:55.084927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.718 [2024-11-04 12:33:55.084932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.718 [2024-11-04 12:33:55.084943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.718 qpair failed and we were unable to recover it.
00:29:20.718 [2024-11-04 12:33:55.094884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.718 [2024-11-04 12:33:55.094923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.718 [2024-11-04 12:33:55.094933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.718 [2024-11-04 12:33:55.094938] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.718 [2024-11-04 12:33:55.094943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.718 [2024-11-04 12:33:55.094953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.718 qpair failed and we were unable to recover it.
00:29:20.718 [2024-11-04 12:33:55.104932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.718 [2024-11-04 12:33:55.104977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.718 [2024-11-04 12:33:55.104987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.718 [2024-11-04 12:33:55.104992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.718 [2024-11-04 12:33:55.104996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.718 [2024-11-04 12:33:55.105007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.718 qpair failed and we were unable to recover it.
00:29:20.718 [2024-11-04 12:33:55.115029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.718 [2024-11-04 12:33:55.115121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.718 [2024-11-04 12:33:55.115131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.718 [2024-11-04 12:33:55.115139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.718 [2024-11-04 12:33:55.115143] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.718 [2024-11-04 12:33:55.115154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.718 qpair failed and we were unable to recover it.
00:29:20.718 [2024-11-04 12:33:55.124974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.718 [2024-11-04 12:33:55.125014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.718 [2024-11-04 12:33:55.125024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.718 [2024-11-04 12:33:55.125029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.718 [2024-11-04 12:33:55.125034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.718 [2024-11-04 12:33:55.125044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.718 qpair failed and we were unable to recover it.
00:29:20.718 [2024-11-04 12:33:55.134985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.718 [2024-11-04 12:33:55.135025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.718 [2024-11-04 12:33:55.135035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.718 [2024-11-04 12:33:55.135041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.718 [2024-11-04 12:33:55.135045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.718 [2024-11-04 12:33:55.135056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.718 qpair failed and we were unable to recover it.
00:29:20.718 [2024-11-04 12:33:55.145052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.718 [2024-11-04 12:33:55.145093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.718 [2024-11-04 12:33:55.145103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.718 [2024-11-04 12:33:55.145108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.718 [2024-11-04 12:33:55.145113] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.718 [2024-11-04 12:33:55.145123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.718 qpair failed and we were unable to recover it.
00:29:20.718 [2024-11-04 12:33:55.155123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.718 [2024-11-04 12:33:55.155172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.718 [2024-11-04 12:33:55.155182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.718 [2024-11-04 12:33:55.155188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.718 [2024-11-04 12:33:55.155193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.718 [2024-11-04 12:33:55.155203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.718 qpair failed and we were unable to recover it.
00:29:20.718 [2024-11-04 12:33:55.165114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.718 [2024-11-04 12:33:55.165154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.718 [2024-11-04 12:33:55.165164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.718 [2024-11-04 12:33:55.165169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.718 [2024-11-04 12:33:55.165174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.718 [2024-11-04 12:33:55.165184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.718 qpair failed and we were unable to recover it.
00:29:20.718 [2024-11-04 12:33:55.175152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.718 [2024-11-04 12:33:55.175195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.718 [2024-11-04 12:33:55.175205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.718 [2024-11-04 12:33:55.175210] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.718 [2024-11-04 12:33:55.175216] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.718 [2024-11-04 12:33:55.175227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.718 qpair failed and we were unable to recover it.
00:29:20.718 [2024-11-04 12:33:55.185023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.718 [2024-11-04 12:33:55.185065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.718 [2024-11-04 12:33:55.185077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.718 [2024-11-04 12:33:55.185082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.718 [2024-11-04 12:33:55.185087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.718 [2024-11-04 12:33:55.185098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.718 qpair failed and we were unable to recover it.
00:29:20.718 [2024-11-04 12:33:55.195285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.718 [2024-11-04 12:33:55.195336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.718 [2024-11-04 12:33:55.195346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.718 [2024-11-04 12:33:55.195351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.718 [2024-11-04 12:33:55.195355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.718 [2024-11-04 12:33:55.195366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.718 qpair failed and we were unable to recover it.
00:29:20.718 [2024-11-04 12:33:55.205216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.718 [2024-11-04 12:33:55.205257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.718 [2024-11-04 12:33:55.205267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.718 [2024-11-04 12:33:55.205275] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.718 [2024-11-04 12:33:55.205280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.718 [2024-11-04 12:33:55.205290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.718 qpair failed and we were unable to recover it.
00:29:20.719 [2024-11-04 12:33:55.215259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.719 [2024-11-04 12:33:55.215301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.719 [2024-11-04 12:33:55.215311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.719 [2024-11-04 12:33:55.215316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.719 [2024-11-04 12:33:55.215321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.719 [2024-11-04 12:33:55.215331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.719 qpair failed and we were unable to recover it.
00:29:20.719 [2024-11-04 12:33:55.225268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.719 [2024-11-04 12:33:55.225325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.719 [2024-11-04 12:33:55.225335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.719 [2024-11-04 12:33:55.225340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.719 [2024-11-04 12:33:55.225344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.719 [2024-11-04 12:33:55.225355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.719 qpair failed and we were unable to recover it.
00:29:20.719 [2024-11-04 12:33:55.235343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.719 [2024-11-04 12:33:55.235390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.719 [2024-11-04 12:33:55.235400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.719 [2024-11-04 12:33:55.235405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.719 [2024-11-04 12:33:55.235410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.719 [2024-11-04 12:33:55.235420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.719 qpair failed and we were unable to recover it.
00:29:20.719 [2024-11-04 12:33:55.245309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.719 [2024-11-04 12:33:55.245350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.719 [2024-11-04 12:33:55.245360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.719 [2024-11-04 12:33:55.245365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.719 [2024-11-04 12:33:55.245370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.719 [2024-11-04 12:33:55.245380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.719 qpair failed and we were unable to recover it.
00:29:20.719 [2024-11-04 12:33:55.255337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.719 [2024-11-04 12:33:55.255425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.719 [2024-11-04 12:33:55.255435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.719 [2024-11-04 12:33:55.255441] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.719 [2024-11-04 12:33:55.255446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.719 [2024-11-04 12:33:55.255456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.719 qpair failed and we were unable to recover it.
00:29:20.719 [2024-11-04 12:33:55.265371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.719 [2024-11-04 12:33:55.265414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.719 [2024-11-04 12:33:55.265424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.719 [2024-11-04 12:33:55.265429] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.719 [2024-11-04 12:33:55.265433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.719 [2024-11-04 12:33:55.265444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.719 qpair failed and we were unable to recover it.
00:29:20.719 [2024-11-04 12:33:55.275433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.719 [2024-11-04 12:33:55.275483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.719 [2024-11-04 12:33:55.275502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.719 [2024-11-04 12:33:55.275508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.719 [2024-11-04 12:33:55.275514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.719 [2024-11-04 12:33:55.275528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.719 qpair failed and we were unable to recover it.
00:29:20.981 [2024-11-04 12:33:55.285425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.981 [2024-11-04 12:33:55.285475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.981 [2024-11-04 12:33:55.285493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.981 [2024-11-04 12:33:55.285499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.981 [2024-11-04 12:33:55.285505] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.981 [2024-11-04 12:33:55.285518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.981 qpair failed and we were unable to recover it.
00:29:20.981 [2024-11-04 12:33:55.295393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.981 [2024-11-04 12:33:55.295441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.981 [2024-11-04 12:33:55.295463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.981 [2024-11-04 12:33:55.295470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.981 [2024-11-04 12:33:55.295475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.981 [2024-11-04 12:33:55.295489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.981 qpair failed and we were unable to recover it.
00:29:20.981 [2024-11-04 12:33:55.305479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.981 [2024-11-04 12:33:55.305523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.981 [2024-11-04 12:33:55.305535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.981 [2024-11-04 12:33:55.305540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.981 [2024-11-04 12:33:55.305545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.981 [2024-11-04 12:33:55.305556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.981 qpair failed and we were unable to recover it.
00:29:20.981 [2024-11-04 12:33:55.315546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.981 [2024-11-04 12:33:55.315591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.981 [2024-11-04 12:33:55.315601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.981 [2024-11-04 12:33:55.315606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.981 [2024-11-04 12:33:55.315611] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.981 [2024-11-04 12:33:55.315621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.981 qpair failed and we were unable to recover it.
00:29:20.981 [2024-11-04 12:33:55.325544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.981 [2024-11-04 12:33:55.325624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.981 [2024-11-04 12:33:55.325634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.981 [2024-11-04 12:33:55.325639] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.981 [2024-11-04 12:33:55.325644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.981 [2024-11-04 12:33:55.325655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.981 qpair failed and we were unable to recover it.
00:29:20.981 [2024-11-04 12:33:55.335573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.981 [2024-11-04 12:33:55.335614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.981 [2024-11-04 12:33:55.335624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.981 [2024-11-04 12:33:55.335629] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.981 [2024-11-04 12:33:55.335634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.981 [2024-11-04 12:33:55.335647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.981 qpair failed and we were unable to recover it.
00:29:20.981 [2024-11-04 12:33:55.345610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.981 [2024-11-04 12:33:55.345654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.981 [2024-11-04 12:33:55.345664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.981 [2024-11-04 12:33:55.345669] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.981 [2024-11-04 12:33:55.345674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.981 [2024-11-04 12:33:55.345685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.981 qpair failed and we were unable to recover it.
00:29:20.981 [2024-11-04 12:33:55.355667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.981 [2024-11-04 12:33:55.355716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.981 [2024-11-04 12:33:55.355726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.981 [2024-11-04 12:33:55.355731] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.981 [2024-11-04 12:33:55.355736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.981 [2024-11-04 12:33:55.355749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.981 qpair failed and we were unable to recover it.
00:29:20.981 [2024-11-04 12:33:55.365647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.981 [2024-11-04 12:33:55.365687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.981 [2024-11-04 12:33:55.365697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.981 [2024-11-04 12:33:55.365702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.981 [2024-11-04 12:33:55.365707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.981 [2024-11-04 12:33:55.365717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.981 qpair failed and we were unable to recover it.
00:29:20.981 [2024-11-04 12:33:55.375725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.981 [2024-11-04 12:33:55.375768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.981 [2024-11-04 12:33:55.375779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.981 [2024-11-04 12:33:55.375784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.982 [2024-11-04 12:33:55.375789] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.982 [2024-11-04 12:33:55.375799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.982 qpair failed and we were unable to recover it.
00:29:20.982 [2024-11-04 12:33:55.385694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.982 [2024-11-04 12:33:55.385763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.982 [2024-11-04 12:33:55.385776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.982 [2024-11-04 12:33:55.385781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.982 [2024-11-04 12:33:55.385786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.982 [2024-11-04 12:33:55.385797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.982 qpair failed and we were unable to recover it.
00:29:20.982 [2024-11-04 12:33:55.395781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.982 [2024-11-04 12:33:55.395828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.982 [2024-11-04 12:33:55.395839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.982 [2024-11-04 12:33:55.395844] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.982 [2024-11-04 12:33:55.395848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.982 [2024-11-04 12:33:55.395859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.982 qpair failed and we were unable to recover it.
00:29:20.982 [2024-11-04 12:33:55.405748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.982 [2024-11-04 12:33:55.405799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.982 [2024-11-04 12:33:55.405808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.982 [2024-11-04 12:33:55.405814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.982 [2024-11-04 12:33:55.405818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.982 [2024-11-04 12:33:55.405829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.982 qpair failed and we were unable to recover it.
00:29:20.982 [2024-11-04 12:33:55.415776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.982 [2024-11-04 12:33:55.415824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.982 [2024-11-04 12:33:55.415834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.982 [2024-11-04 12:33:55.415840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.982 [2024-11-04 12:33:55.415844] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.982 [2024-11-04 12:33:55.415855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.982 qpair failed and we were unable to recover it.
00:29:20.982 [2024-11-04 12:33:55.425818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.982 [2024-11-04 12:33:55.425864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.982 [2024-11-04 12:33:55.425873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.982 [2024-11-04 12:33:55.425879] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.982 [2024-11-04 12:33:55.425883] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e38000b90
00:29:20.982 [2024-11-04 12:33:55.425896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.982 qpair failed and we were unable to recover it.
00:29:20.982 [2024-11-04 12:33:55.435875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.982 [2024-11-04 12:33:55.435931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.982 [2024-11-04 12:33:55.435959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.982 [2024-11-04 12:33:55.435968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.982 [2024-11-04 12:33:55.435975] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x146a180
00:29:20.982 [2024-11-04 12:33:55.435995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.982 qpair failed and we were unable to recover it.
00:29:20.982 [2024-11-04 12:33:55.445849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.982 [2024-11-04 12:33:55.445901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.982 [2024-11-04 12:33:55.445916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.982 [2024-11-04 12:33:55.445923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.982 [2024-11-04 12:33:55.445930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x146a180
00:29:20.982 [2024-11-04 12:33:55.445944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.982 qpair failed and we were unable to recover it.
00:29:20.982 [2024-11-04 12:33:55.455908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.982 [2024-11-04 12:33:55.456016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.982 [2024-11-04 12:33:55.456082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.982 [2024-11-04 12:33:55.456107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.982 [2024-11-04 12:33:55.456128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e40000b90
00:29:20.982 [2024-11-04 12:33:55.456185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:20.982 qpair failed and we were unable to recover it.
00:29:20.982 [2024-11-04 12:33:55.465913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.982 [2024-11-04 12:33:55.465984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.982 [2024-11-04 12:33:55.466014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.982 [2024-11-04 12:33:55.466031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.982 [2024-11-04 12:33:55.466045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e40000b90
00:29:20.982 [2024-11-04 12:33:55.466078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:20.982 qpair failed and we were unable to recover it.
00:29:20.982 [2024-11-04 12:33:55.476045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.982 [2024-11-04 12:33:55.476148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.982 [2024-11-04 12:33:55.476223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.982 [2024-11-04 12:33:55.476250] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.982 [2024-11-04 12:33:55.476271] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e34000b90
00:29:20.982 [2024-11-04 12:33:55.476325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:20.982 qpair failed and we were unable to recover it.
00:29:20.982 [2024-11-04 12:33:55.485882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:20.982 [2024-11-04 12:33:55.485981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:20.982 [2024-11-04 12:33:55.486012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:20.982 [2024-11-04 12:33:55.486027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:20.982 [2024-11-04 12:33:55.486042] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7e34000b90
00:29:20.982 [2024-11-04 12:33:55.486073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:20.982 qpair failed and we were unable to recover it.
00:29:20.982 [2024-11-04 12:33:55.486204] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:29:20.982 A controller has encountered a failure and is being reset.
00:29:20.982 Controller properly reset.
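Once the Keep Alive can no longer be submitted, the host declares the controller failed and resets it, which is the expected outcome of this disconnect test. For reference, one of these failing attempts can be reproduced by hand from the initiator with stock nvme-cli; this is a sketch rather than part of the test script, with transport, address, port and subsystem NQN taken from the log:

    $ sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 \
          -n nqn.2016-06.io.spdk:cnode1 --nr-io-queues=4
    # While the target is being disconnected/reset, the CONNECT for the
    # I/O queues is rejected and the command exits non-zero, mirroring
    # the sct 1 / sc 130 completions above.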
00:29:20.982 Initializing NVMe Controllers 00:29:20.982 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:20.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:20.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:20.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:20.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:20.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:20.982 Initialization complete. Launching workers. 00:29:20.982 Starting thread on core 1 00:29:20.982 Starting thread on core 2 00:29:20.982 Starting thread on core 3 00:29:20.982 Starting thread on core 0 00:29:20.982 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:20.982 00:29:20.982 real 0m11.463s 00:29:20.982 user 0m21.509s 00:29:20.982 sys 0m3.671s 00:29:20.982 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:20.982 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.982 ************************************ 00:29:20.982 END TEST nvmf_target_disconnect_tc2 00:29:20.982 ************************************ 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.244 rmmod nvme_tcp 00:29:21.244 rmmod nvme_fabrics 00:29:21.244 rmmod nvme_keyring 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 1829470 ']' 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 1829470 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1829470 ']' 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1829470 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1829470 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1829470' 00:29:21.244 killing process with pid 1829470 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1829470 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1829470 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:21.244 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:29:21.504 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:21.504 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:21.504 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.504 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.504 12:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.421 12:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:23.421 00:29:23.421 real 0m21.644s 00:29:23.421 user 0m49.554s 00:29:23.421 sys 0m9.611s 00:29:23.421 12:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:23.421 12:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:23.421 ************************************ 00:29:23.421 END TEST nvmf_target_disconnect 00:29:23.421 ************************************ 00:29:23.421 12:33:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:23.421 00:29:23.421 real 6m22.788s 00:29:23.421 user 11m11.602s 00:29:23.421 sys 2m9.057s 00:29:23.421 12:33:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:23.421 12:33:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.421 ************************************ 00:29:23.421 END TEST nvmf_host 00:29:23.421 ************************************ 00:29:23.421 12:33:57 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:23.421 12:33:57 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:23.421 12:33:57 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:23.421 12:33:57 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:23.421 12:33:57 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:23.421 12:33:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:23.684 ************************************ 00:29:23.684 START TEST nvmf_target_core_interrupt_mode 00:29:23.684 ************************************ 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:23.684 * Looking for test storage... 00:29:23.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:23.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.684 --rc genhtml_branch_coverage=1 00:29:23.684 --rc genhtml_function_coverage=1 00:29:23.684 --rc genhtml_legend=1 00:29:23.684 --rc geninfo_all_blocks=1 00:29:23.684 --rc geninfo_unexecuted_blocks=1 00:29:23.684 00:29:23.684 ' 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:23.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.684 --rc genhtml_branch_coverage=1 00:29:23.684 --rc genhtml_function_coverage=1 00:29:23.684 --rc genhtml_legend=1 00:29:23.684 --rc geninfo_all_blocks=1 00:29:23.684 --rc geninfo_unexecuted_blocks=1 00:29:23.684 00:29:23.684 ' 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:23.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.684 --rc genhtml_branch_coverage=1 00:29:23.684 --rc genhtml_function_coverage=1 00:29:23.684 --rc genhtml_legend=1 00:29:23.684 --rc geninfo_all_blocks=1 00:29:23.684 --rc geninfo_unexecuted_blocks=1 00:29:23.684 00:29:23.684 ' 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:23.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.684 --rc genhtml_branch_coverage=1 00:29:23.684 --rc genhtml_function_coverage=1 00:29:23.684 --rc genhtml_legend=1 00:29:23.684 --rc geninfo_all_blocks=1 00:29:23.684 --rc geninfo_unexecuted_blocks=1 00:29:23.684 00:29:23.684 ' 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.684 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.685 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.685 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.685 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.685 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.685 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.685 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.685 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:23.685 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:23.685 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.685 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.685 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.685 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.685 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.685 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:23.947 ************************************ 00:29:23.947 START TEST nvmf_abort 00:29:23.947 ************************************ 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:23.947 * Looking for test storage... 00:29:23.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:23.947 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:23.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.948 --rc genhtml_branch_coverage=1 00:29:23.948 --rc genhtml_function_coverage=1 00:29:23.948 --rc genhtml_legend=1 00:29:23.948 --rc geninfo_all_blocks=1 00:29:23.948 --rc geninfo_unexecuted_blocks=1 00:29:23.948 00:29:23.948 ' 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:23.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.948 --rc genhtml_branch_coverage=1 00:29:23.948 --rc genhtml_function_coverage=1 00:29:23.948 --rc genhtml_legend=1 00:29:23.948 --rc geninfo_all_blocks=1 00:29:23.948 --rc geninfo_unexecuted_blocks=1 00:29:23.948 00:29:23.948 ' 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:23.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.948 --rc genhtml_branch_coverage=1 00:29:23.948 --rc genhtml_function_coverage=1 00:29:23.948 --rc genhtml_legend=1 00:29:23.948 --rc geninfo_all_blocks=1 00:29:23.948 --rc geninfo_unexecuted_blocks=1 00:29:23.948 00:29:23.948 ' 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:23.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.948 --rc genhtml_branch_coverage=1 00:29:23.948 --rc genhtml_function_coverage=1 00:29:23.948 --rc genhtml_legend=1 00:29:23.948 --rc geninfo_all_blocks=1 00:29:23.948 --rc geninfo_unexecuted_blocks=1 00:29:23.948 00:29:23.948 ' 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.948 12:33:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:23.948 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:24.210 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:24.210 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:24.210 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:24.210 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:24.210 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.210 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:24.210 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:24.210 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:24.210 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.210 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.210 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.210 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:24.210 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:24.210 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:24.210 12:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:32.360 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:32.360 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:32.360 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:32.360 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:32.360 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:32.360 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:32.360 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:32.360 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:32.360 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:32.360 12:34:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:32.360 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:32.360 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:32.360 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:32.361 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
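Device discovery here matches the two E810 ports (vendor 0x8086, device 0x159b, bound to the ice driver). The PCI-function-to-netdev mapping that the following lines print is just a sysfs glob, as a standalone sketch with the addresses from this rig (the cvl_* names are specific to this test bed, as the "Found net devices" lines below confirm):

    $ ls /sys/bus/pci/devices/0000:4b:00.0/net/
    cvl_0_0
    $ ls /sys/bus/pci/devices/0000:4b:00.1/net/
    cvl_0_1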
00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:32.361 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:32.361 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:32.361 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:32.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:29:32.361 00:29:32.361 --- 10.0.0.2 ping statistics --- 00:29:32.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.361 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:32.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:29:32.361 00:29:32.361 --- 10.0.0.1 ping statistics --- 00:29:32.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.361 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.361 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:32.362 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:32.362 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:32.362 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:32.362 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:32.362 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:32.362 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=1835087 00:29:32.362 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1835087 00:29:32.362 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:32.362 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1835087 ']' 00:29:32.362 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.362 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:32.362 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.362 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:32.362 12:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:32.362 [2024-11-04 12:34:05.808188] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:32.362 [2024-11-04 12:34:05.809327] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:29:32.362 [2024-11-04 12:34:05.809379] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.362 [2024-11-04 12:34:05.898339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:32.362 [2024-11-04 12:34:05.949978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:32.362 [2024-11-04 12:34:05.950026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.362 [2024-11-04 12:34:05.950034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:32.362 [2024-11-04 12:34:05.950041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:32.362 [2024-11-04 12:34:05.950047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:32.362 [2024-11-04 12:34:05.951790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:32.362 [2024-11-04 12:34:05.952020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:32.362 [2024-11-04 12:34:05.952023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.362 [2024-11-04 12:34:06.027673] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:32.362 [2024-11-04 12:34:06.027740] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:32.362 [2024-11-04 12:34:06.028470] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
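Everything from nvmf_tcp_init up to the nvmf_tgt launch above reduces to a two-port, one-namespace topology: the target-side port is moved into its own network namespace and the initiator port stays in the root namespace. A minimal sketch, assuming the interface and address names shown in this log (the real logic lives in nvmf/common.sh, so treat this as an illustration rather than the script itself):

    TGT_IF=cvl_0_0                 # target-side port, isolated in a netns
    INI_IF=cvl_0_1                 # initiator-side port, stays in the root ns
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"                      # NVMF_INITIATOR_IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # NVMF_FIRST_TARGET_IP
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port, tagging the rule so teardown can find it later
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                        # root ns -> target, checked above
    ip netns exec "$NS" ping -c 1 10.0.0.1    # namespace -> initiator
    # the target itself then runs inside the namespace, in interrupt mode:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &

Both pings answering in under a millisecond is what lets the helper return 0 and the test proceed.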
00:29:32.362 [2024-11-04 12:34:06.028709] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:32.362 [2024-11-04 12:34:06.673069] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:32.362 Malloc0 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:32.362 Delay0 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:32.362 [2024-11-04 12:34:06.769039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.362 12:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:32.362 [2024-11-04 12:34:06.884472] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:34.911 Initializing NVMe Controllers 00:29:34.911 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:34.911 controller IO queue size 128 less than required 00:29:34.911 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:34.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:34.911 Initialization complete. Launching workers. 
00:29:34.911 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29059 00:29:34.911 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29116, failed to submit 66 00:29:34.911 success 29059, unsuccessful 57, failed 0 00:29:34.911 12:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:34.911 12:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.911 12:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:34.911 12:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.911 12:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:34.911 12:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:34.911 12:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:34.911 12:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:34.911 12:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:34.911 12:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:34.911 12:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:34.911 12:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:34.911 rmmod nvme_tcp 00:29:34.911 rmmod nvme_fabrics 00:29:34.911 rmmod nvme_keyring 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1835087 ']' 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1835087 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1835087 ']' 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1835087 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1835087 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1835087' 00:29:34.911 killing process with pid 1835087 
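Condensed, the nvmf_abort run above is: provision one deliberately slow namespace over the RPC socket, then hammer it with the abort example so that almost every I/O is still queued when its abort arrives. A sketch of the rpc_cmd sequence as traced (rpc.py and binary paths shortened):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0           # 64 MiB bdev, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1 s latency on every path (us)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # queue depth 128 against a ~1 s namespace guarantees a deep backlog to abort:
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The summary numbers are self-consistent: 123 I/Os completed plus 29059 failed (i.e. aborted) gives 29182 total, which matches 29116 aborts submitted plus 66 that could not be submitted; of the submitted aborts, 29059 succeeded and 57 did not (29059 + 57 = 29116).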
00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1835087 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1835087 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.911 12:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.825 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:36.825 00:29:36.826 real 0m13.061s 00:29:36.826 user 0m10.812s 00:29:36.826 sys 0m6.673s 00:29:36.826 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:36.826 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:36.826 ************************************ 00:29:36.826 END TEST nvmf_abort 00:29:36.826 ************************************ 00:29:37.087 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:37.087 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:37.087 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:37.087 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:37.087 ************************************ 00:29:37.087 START TEST nvmf_ns_hotplug_stress 00:29:37.087 ************************************ 00:29:37.087 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:37.087 * Looking for test storage... 
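The nvmf_abort teardown just logged is the setup's mirror image. Roughly, and assuming remove_spdk_ns amounts to deleting the namespace created earlier:

    modprobe -r nvme-tcp nvme-fabrics     # drops nvme_tcp, nvme_fabrics, nvme_keyring
    kill "$nvmfpid" && wait "$nvmfpid"    # stop the in-namespace target and reap it
    # drop only the rules tagged SPDK_NVMF at setup, leaving the rest of the
    # firewall untouched -- this is what the iptr helper's grep -v does:
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk       # cvl_0_0 falls back to the root ns
    ip -4 addr flush cvl_0_1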
00:29:37.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:37.087 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:37.087 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:29:37.087 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:37.087 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:37.087 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:37.087 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:37.087 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:37.087 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:37.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.088 --rc genhtml_branch_coverage=1 00:29:37.088 --rc genhtml_function_coverage=1 00:29:37.088 --rc genhtml_legend=1 00:29:37.088 --rc geninfo_all_blocks=1 00:29:37.088 --rc geninfo_unexecuted_blocks=1 00:29:37.088 00:29:37.088 ' 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:37.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.088 --rc genhtml_branch_coverage=1 00:29:37.088 --rc genhtml_function_coverage=1 00:29:37.088 --rc genhtml_legend=1 00:29:37.088 --rc geninfo_all_blocks=1 00:29:37.088 --rc geninfo_unexecuted_blocks=1 00:29:37.088 00:29:37.088 ' 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:37.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.088 --rc genhtml_branch_coverage=1 00:29:37.088 --rc genhtml_function_coverage=1 00:29:37.088 --rc genhtml_legend=1 00:29:37.088 --rc geninfo_all_blocks=1 00:29:37.088 --rc geninfo_unexecuted_blocks=1 00:29:37.088 00:29:37.088 ' 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:37.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.088 --rc genhtml_branch_coverage=1 00:29:37.088 --rc genhtml_function_coverage=1 
00:29:37.088 --rc genhtml_legend=1 00:29:37.088 --rc geninfo_all_blocks=1 00:29:37.088 --rc geninfo_unexecuted_blocks=1 00:29:37.088 00:29:37.088 ' 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:37.088 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
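The scripts/common.sh trace above is the harness asking whether the installed lcov predates version 2 ('lt 1.15 2') before choosing coverage flags. Condensed into a sketch (the real cmp_versions also normalizes non-numeric fields, omitted here):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-: op=$2 v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        # walk the longer of the two version arrays, treating missing fields as 0
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]        # all fields equal: only ==, <=, >= succeed
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"   # true: decided at the first field, 1 < 2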
00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:37.350 12:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:44.095 12:34:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:44.095 12:34:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:44.095 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.095 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:44.096 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:44.096 
12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:44.096 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:44.096 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:44.096 12:34:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:44.096 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:44.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:44.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:29:44.357 00:29:44.357 --- 10.0.0.2 ping statistics --- 00:29:44.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.357 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:44.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:44.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:29:44.357 00:29:44.357 --- 10.0.0.1 ping statistics --- 00:29:44.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.357 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:44.357 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:44.619 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:44.619 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:44.619 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:44.619 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:44.619 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1839773 00:29:44.619 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1839773 00:29:44.619 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:44.619 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1839773 ']' 00:29:44.619 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.619 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:44.619 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
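Backing up to the gather_supported_nvmf_pci_devs pass traced above: it resolves supported vendor:device IDs to kernel interface names through sysfs, which is where the two "Found net devices under 0000:4b:00.x" lines come from. A condensed sketch; the pci_bus_cache layout is assumed here, since the cache is filled by a PCI scan elsewhere in common.sh:

    declare -A pci_bus_cache              # assumed: "vendor:device" -> PCI addresses
    e810=(${pci_bus_cache["0x8086:0x1592"]} ${pci_bus_cache["0x8086:0x159b"]})
    pci_devs=("${e810[@]}")               # SPDK_TEST_NVMF_NICS=e810 in this job
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # sysfs links to netdevs
        pci_net_devs=("${pci_net_devs[@]##*/}")           # strip paths, keep names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
    NVMF_TARGET_INTERFACE=${net_devs[0]}     # cvl_0_0 in this run
    NVMF_INITIATOR_INTERFACE=${net_devs[1]}  # cvl_0_1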
00:29:44.619 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:44.619 12:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:44.619 [2024-11-04 12:34:19.006221] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:44.619 [2024-11-04 12:34:19.007379] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:29:44.619 [2024-11-04 12:34:19.007439] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.619 [2024-11-04 12:34:19.094419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:44.619 [2024-11-04 12:34:19.146363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:44.619 [2024-11-04 12:34:19.146415] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:44.619 [2024-11-04 12:34:19.146423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:44.619 [2024-11-04 12:34:19.146431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:44.619 [2024-11-04 12:34:19.146438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:44.620 [2024-11-04 12:34:19.148193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:44.620 [2024-11-04 12:34:19.148359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.620 [2024-11-04 12:34:19.148360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:44.880 [2024-11-04 12:34:19.224936] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:44.880 [2024-11-04 12:34:19.225010] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:44.880 [2024-11-04 12:34:19.225661] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:44.880 [2024-11-04 12:34:19.225941] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
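Why three reactors land on cores 1, 2 and 3: the target was started with -m 0xE, and 0xE is binary 1110, i.e. cores 1-3 with core 0 left free; --interrupt-mode is likewise what produces the spdk_thread_set_interrupt_mode notices for app_thread and the three poll groups. Decoding the mask:

    mask=0xE                               # the -m argument above
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && echo "reactor on core $core"
    done
    # prints cores 1, 2, 3 -- matching the three 'Reactor started' notices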
00:29:45.453 12:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:45.453 12:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:29:45.453 12:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:45.453 12:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:45.453 12:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:45.453 12:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:45.453 12:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:29:45.453 12:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:45.715 [2024-11-04 12:34:20.033262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.715 12:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:45.715 12:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:45.976 [2024-11-04 12:34:20.393770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:45.976 12:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:46.238 12:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:46.238 Malloc0 00:29:46.238 12:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:46.498 Delay0 00:29:46.498 12:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.759 12:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:46.759 NULL1 00:29:46.759 12:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
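From here the log settles into the hotplug loop proper: while spdk_nvme_perf (PERF_PID 1840282) drives randread I/O for 30 seconds, the script repeatedly hot-removes and re-adds namespace 1 and grows NULL1 by one block per pass, hence the null_size=1001, 1002, ... and 'true' lines that repeat below. Reconstructed as a sketch from the xtrace (rpc.py path shortened):

    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do                          # perf still running?
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-remove nsid 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # hot-add it back
        rpc.py bdev_null_resize NULL1 $((++null_size))                 # 1001, 1002, ...
    done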
00:29:47.020 12:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1840282 00:29:47.020 12:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:47.020 12:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:47.020 12:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:47.281 12:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:47.281 12:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:47.281 12:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:47.541 true 00:29:47.541 12:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:47.541 12:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:47.802 12:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:48.063 12:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:48.063 12:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:48.063 true 00:29:48.063 12:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:48.063 12:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:48.324 12:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:48.585 12:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:48.585 12:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:48.585 true 00:29:48.585 12:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:48.585 12:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:48.845 12:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.106 12:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:49.106 12:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:49.106 true 00:29:49.106 12:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:49.106 12:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.366 12:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.626 12:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:49.626 12:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:49.886 true 00:29:49.886 12:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:49.886 12:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.886 12:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.146 12:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:50.146 12:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:50.407 true 00:29:50.407 12:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:50.407 12:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:50.407 12:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.668 12:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:50.668 12:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:50.928 true 00:29:50.928 12:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:50.929 12:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.188 12:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.188 12:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:51.188 12:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:51.449 true 00:29:51.449 12:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:51.449 12:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.709 12:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.709 12:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:51.709 12:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:51.970 true 00:29:51.970 12:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:51.970 12:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.230 12:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.491 12:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:52.491 12:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:52.491 true 00:29:52.491 12:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 1840282 00:29:52.491 12:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.752 12:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.013 12:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:53.013 12:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:53.013 true 00:29:53.013 12:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:53.013 12:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.273 12:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.535 12:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:53.535 12:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:53.535 true 00:29:53.535 12:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:53.535 12:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.796 12:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.057 12:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:54.057 12:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:54.057 true 00:29:54.057 12:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:54.057 12:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.318 12:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.580 12:34:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:54.580 12:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:54.840 true 00:29:54.840 12:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:54.840 12:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.840 12:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.101 12:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:55.101 12:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:55.363 true 00:29:55.363 12:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:55.363 12:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:55.363 12:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.623 12:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:55.623 12:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:55.884 true 00:29:55.884 12:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:55.884 12:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.145 12:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.145 12:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:56.145 12:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:56.406 true 00:29:56.406 12:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:56.406 12:34:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.667 12:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.667 12:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:56.667 12:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:56.928 true 00:29:56.928 12:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:56.928 12:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.189 12:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.189 12:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:57.189 12:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:57.449 true 00:29:57.449 12:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:57.449 12:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.709 12:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.969 12:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:57.969 12:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:57.969 true 00:29:57.969 12:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:57.969 12:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.230 12:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.490 12:34:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:58.490 12:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:58.490 true 00:29:58.490 12:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:58.490 12:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.789 12:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.050 12:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:59.050 12:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:59.050 true 00:29:59.050 12:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:59.050 12:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:59.311 12:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.571 12:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:59.571 12:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:59.571 true 00:29:59.571 12:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:29:59.571 12:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:59.832 12:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:00.092 12:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:00.092 12:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:00.092 true 00:30:00.352 12:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:00.352 12:34:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:00.352 12:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:00.613 12:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:00.614 12:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:00.874 true 00:30:00.874 12:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:00.874 12:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:00.874 12:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:01.135 12:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:01.135 12:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:01.396 true 00:30:01.396 12:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:01.396 12:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.396 12:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:01.657 12:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:01.657 12:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:01.918 true 00:30:01.918 12:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:01.918 12:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.179 12:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.179 12:34:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:02.179 12:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:02.439 true 00:30:02.439 12:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:02.439 12:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.700 12:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.700 12:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:02.700 12:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:02.961 true 00:30:02.961 12:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:02.961 12:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.221 12:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.221 12:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:03.221 12:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:03.483 true 00:30:03.483 12:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:03.483 12:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.743 12:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.004 12:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:04.004 12:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:04.004 true 00:30:04.004 12:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:04.004 12:34:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.265 12:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.527 12:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:04.527 12:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:04.527 true 00:30:04.527 12:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:04.527 12:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.789 12:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.050 12:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:05.050 12:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:05.050 true 00:30:05.050 12:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:05.050 12:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.310 12:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.571 12:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:05.571 12:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:05.571 true 00:30:05.571 12:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:05.571 12:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.832 12:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.093 12:34:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:06.093 12:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:06.093 true 00:30:06.354 12:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:06.354 12:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.354 12:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.615 12:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:30:06.615 12:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:30:06.876 true 00:30:06.876 12:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:06.876 12:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.876 12:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.137 12:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:30:07.137 12:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:30:07.399 true 00:30:07.399 12:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:07.399 12:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.660 12:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.660 12:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:30:07.660 12:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:30:07.920 true 00:30:07.920 12:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:07.920 12:34:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.181 12:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.181 12:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:30:08.181 12:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:30:08.442 true 00:30:08.442 12:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:08.442 12:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.702 12:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.702 12:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:30:08.702 12:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:30:08.963 true 00:30:08.963 12:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:08.963 12:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.223 12:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.223 12:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:30:09.483 12:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:30:09.483 true 00:30:09.483 12:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:09.483 12:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.745 12:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.007 12:34:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:30:10.007 12:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:30:10.007 true 00:30:10.007 12:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:10.007 12:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.269 12:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.529 12:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:30:10.529 12:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:30:10.529 true 00:30:10.529 12:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:10.529 12:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.789 12:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.051 12:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:30:11.051 12:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:30:11.051 true 00:30:11.051 12:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:11.051 12:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.311 12:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.572 12:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:30:11.572 12:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:30:11.572 true 00:30:11.833 12:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:11.833 12:34:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.833 12:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.095 12:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:30:12.095 12:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:30:12.355 true 00:30:12.355 12:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:12.355 12:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.356 12:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.617 12:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:30:12.617 12:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:30:12.878 true 00:30:12.878 12:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:12.878 12:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.878 12:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.139 12:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:30:13.139 12:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:30:13.400 true 00:30:13.401 12:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:13.401 12:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.662 12:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.662 12:34:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:30:13.662 12:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:30:13.924 true 00:30:13.924 12:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:13.924 12:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.184 12:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.184 12:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:30:14.184 12:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:30:14.446 true 00:30:14.446 12:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:14.446 12:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.706 12:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.706 12:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:30:14.706 12:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:30:14.967 true 00:30:14.967 12:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:14.967 12:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.229 12:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.492 12:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:30:15.492 12:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:30:15.492 true 00:30:15.492 12:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282 00:30:15.492 12:34:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:15.753 12:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:16.013 12:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:30:16.014 12:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:30:16.014 true
00:30:16.014 12:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282
00:30:16.014 12:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:16.275 12:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:16.536 12:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:30:16.536 12:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:30:16.536 true
00:30:16.536 12:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282
00:30:16.536 12:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:16.798 12:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:17.059 12:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:30:17.059 12:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:30:17.059 true
00:30:17.320 12:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282
00:30:17.320 12:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:17.320 Initializing NVMe Controllers
00:30:17.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:17.320 Controller IO queue size 128, less than required.
00:30:17.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:17.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:17.320 Initialization complete. Launching workers.
00:30:17.320 ========================================================
00:30:17.320                                                                              Latency(us)
00:30:17.320 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:30:17.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30214.83      14.75    4236.21    1480.58   10913.94
00:30:17.320 ========================================================
00:30:17.320 Total                                                                   :   30214.83      14.75    4236.21    1480.58   10913.94
00:30:17.320
00:30:17.320 12:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:17.580 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:30:17.580 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:30:17.840 true
00:30:17.840 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1840282
00:30:17.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1840282) - No such process
00:30:17.840 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1840282
00:30:17.840 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:17.840 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:18.100 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:30:18.100 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:30:18.101 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:30:18.101 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:18.101 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:30:18.101 null0
00:30:18.361 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:18.361 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:18.361 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:30:18.361 null1
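The failed kill -0 above marks the end of the stress window: spdk_nvme_perf (PID 1840282) has exited, its buffered output including the latency table was flushed to the log, and the script tears down NSIDs 1 and 2 before starting the threaded phase. Reconstructed from the @40-@50 trace lines, the phase that just ended looks roughly like this sketch (using the $rpc shorthand from the earlier sketch; the while form and the increment are inferred rather than shown verbatim in the log):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!                                  # 1840282 in this run
  while kill -0 $PERF_PID; do                  # loop until the perf workload exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-remove Delay0
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # hot-add it back
      ((null_size++))
      $rpc bdev_null_resize NULL1 $null_size   # live-resize NSID 2 every pass
  done

null_size ran from 1000 to 1056, i.e. 56 hot-remove/hot-add/resize cycles during the 30-second -t 30 run, while the randread workload against NSID 2 still averaged about 30.2k IOPS per the table above.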
00:30:18.361 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:18.361 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:18.361 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:18.622 null2 00:30:18.622 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:18.622 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:18.622 12:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:18.622 null3 00:30:18.622 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:18.622 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:18.622 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:18.883 null4 00:30:18.883 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:18.883 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:18.883 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:19.145 null5 00:30:19.145 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:19.145 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:19.145 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:19.145 null6 00:30:19.145 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:19.145 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:19.145 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:19.408 null7 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:19.408 
12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
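The interleaved sh@14-sh@18 records above come from eight backgrounded copies of the script's add_remove helper, each bound to one NSID/bdev pair on nqn.2016-06.io.spdk:cnode1. Pieced together from the xtrace, the helper is essentially the following; this is a sketch of what the trace implies, not the verbatim script, and rpc_py stands for the full rpc.py path shown in the log:

  add_remove() {
      local nsid=$1 bdev=$2                # sh@14: each worker owns one NSID/bdev pair
      for (( i = 0; i < 10; i++ )); do     # sh@16: ten hotplug cycles per worker
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
      done
  }

Because the eight workers run concurrently, their (( i < 10 )) checks and RPC invocations interleave arbitrarily in the trace that follows.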
00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
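The sh@62-sh@64 records woven through the worker output are the launch loop: each add_remove call is backgrounded and its PID appended to pids, and the sh@66 wait (visible further down with the eight PIDs 1846634-1846647) blocks until every worker has finished its ten cycles. A sketch of that sequence as reconstructed from the trace, under the same assumptions as above:

  for (( i = 0; i < nthreads; i++ )); do   # sh@62: nthreads=8 from sh@58
      add_remove $((i + 1)) "null$i" &     # sh@63: NSID i+1 backed by bdev null<i>
      pids+=($!)                           # sh@64: remember the worker PID
  done
  wait "${pids[@]}"                        # sh@66: e.g. wait 1846634 1846635 ... 1846647

Grabbing $! immediately after each & is what lets the script hand the exact worker PIDs to wait instead of waiting on all children indiscriminately.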
00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
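Each cycle in the storm above is just a pair of JSON-RPC calls against the running target. For reference, one iteration of the NSID 6/null5 worker could be replayed by hand with the same rpc.py path this log uses (rpc.py talking to SPDK's default /var/tmp/spdk.sock socket is an assumption here):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6

Since every worker owns a distinct NSID and bdev, the stress lies in concurrent attach/detach on a single subsystem, not in NSID collisions.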
00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1846634 1846635 1846638 1846639 1846641 1846643 1846645 1846647 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.408 12:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:19.670 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:19.670 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:19.670 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:19.670 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.670 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:19.670 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:19.670 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:19.670 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:19.670 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.670 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.670 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:19.670 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.670 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.670 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:19.931 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.931 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.931 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:19.931 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.931 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.931 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:19.931 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.931 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.932 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:19.932 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.932 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.932 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:19.932 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.932 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.932 12:34:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:19.932 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.932 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.932 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:19.932 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:19.932 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:19.932 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:19.932 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.194 12:34:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.194 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:20.195 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:20.195 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.195 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.195 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.456 12:34:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.456 12:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.718 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.979 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:21.240 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:21.240 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:21.240 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:21.240 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:21.240 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:21.240 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.240 12:34:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:21.240 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:21.240 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:21.240 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:21.240 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:21.240 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:21.240 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:21.241 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:21.241 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:21.241 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:21.241 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:21.241 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:21.503 12:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:21.503 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:21.503 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:21.503 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:21.765 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.027 12:34:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.027 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:22.289 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:22.289 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.289 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.289 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:22.289 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.289 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.289 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:22.289 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.289 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.289 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:22.289 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:22.289 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:22.289 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:22.289 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:22.290 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.290 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.290 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:22.290 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.290 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.290 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.290 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:22.290 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:22.551 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.551 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.551 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:22.551 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:22.551 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.551 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.551 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:22.551 12:34:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.551 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.551 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:22.551 12:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:22.551 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.551 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.551 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:22.551 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.552 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.552 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:22.552 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:22.552 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:22.552 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.552 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.552 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:22.552 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.814 
12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:22.814 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:23.076 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:23.076 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:23.076 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:23.076 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:23.076 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:23.076 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:23.076 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:23.076 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.076 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:23.076 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:23.076 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:23.076 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:23.076 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:23.076 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:23.076 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:23.337 12:34:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:23.337 rmmod nvme_tcp 00:30:23.337 rmmod nvme_fabrics 00:30:23.337 rmmod nvme_keyring 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1839773 ']' 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1839773 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1839773 ']' 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1839773 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1839773 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1839773' 00:30:23.337 killing process with pid 1839773 00:30:23.337 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1839773 00:30:23.338 12:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1839773 00:30:23.598 12:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:23.598 12:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:23.598 12:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:23.598 12:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:23.598 12:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:30:23.598 12:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:30:23.598 12:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:23.598 12:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:23.598 12:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:23.598 12:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.598 12:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.598 12:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.147 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:26.147 00:30:26.147 real 0m48.650s 00:30:26.147 user 3m3.503s 00:30:26.147 sys 0m22.701s 00:30:26.147 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:26.148 ************************************ 00:30:26.148 END TEST nvmf_ns_hotplug_stress 00:30:26.148 ************************************ 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@10 -- # set +x 00:30:26.148 ************************************ 00:30:26.148 START TEST nvmf_delete_subsystem 00:30:26.148 ************************************ 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:26.148 * Looking for test storage... 00:30:26.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:26.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.148 --rc genhtml_branch_coverage=1 00:30:26.148 --rc genhtml_function_coverage=1 00:30:26.148 --rc genhtml_legend=1 00:30:26.148 --rc geninfo_all_blocks=1 00:30:26.148 --rc geninfo_unexecuted_blocks=1 00:30:26.148 00:30:26.148 ' 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:26.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.148 --rc genhtml_branch_coverage=1 00:30:26.148 --rc genhtml_function_coverage=1 00:30:26.148 --rc genhtml_legend=1 00:30:26.148 --rc geninfo_all_blocks=1 00:30:26.148 --rc geninfo_unexecuted_blocks=1 00:30:26.148 00:30:26.148 ' 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:26.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.148 --rc genhtml_branch_coverage=1 00:30:26.148 --rc genhtml_function_coverage=1 00:30:26.148 --rc genhtml_legend=1 00:30:26.148 --rc geninfo_all_blocks=1 00:30:26.148 --rc geninfo_unexecuted_blocks=1 00:30:26.148 00:30:26.148 ' 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:26.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.148 --rc genhtml_branch_coverage=1 00:30:26.148 --rc genhtml_function_coverage=1 00:30:26.148 --rc 
genhtml_legend=1 00:30:26.148 --rc geninfo_all_blocks=1 00:30:26.148 --rc geninfo_unexecuted_blocks=1 00:30:26.148 00:30:26.148 ' 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.148 12:35:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.148 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:26.149 12:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:32.740 12:35:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:32.740 12:35:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:32.740 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:32.740 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.740 12:35:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:32.740 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:32.740 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:32.740 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:32.741 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:32.741 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:32.741 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.741 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:32.741 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:32.741 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:32.741 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:33.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:33.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:30:33.003 00:30:33.003 --- 10.0.0.2 ping statistics --- 00:30:33.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.003 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:33.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:33.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:30:33.003 00:30:33.003 --- 10.0.0.1 ping statistics --- 00:30:33.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.003 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:33.003 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:33.264 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:33.264 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:33.264 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:33.264 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:33.264 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1851644 00:30:33.264 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1851644 00:30:33.264 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:33.264 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1851644 ']' 00:30:33.264 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.264 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:33.264 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
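The ip netns and ping commands traced above give the target its own network namespace on one port of the physical NIC pair, so initiator and target can exchange real NVMe/TCP traffic on a single host. A minimal sketch of that wiring, using only the interface names and addresses recorded in this log (root required):

  # Condensed replay of the namespace wiring traced above (names/addresses from this log).
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                            # target port moves into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # verify both directions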
00:30:33.264 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:33.264 12:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:33.264 [2024-11-04 12:35:07.635482] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:33.264 [2024-11-04 12:35:07.636636] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:30:33.264 [2024-11-04 12:35:07.636690] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:33.264 [2024-11-04 12:35:07.708276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:33.264 [2024-11-04 12:35:07.750541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:33.264 [2024-11-04 12:35:07.750580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:33.264 [2024-11-04 12:35:07.750588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:33.264 [2024-11-04 12:35:07.750594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:33.264 [2024-11-04 12:35:07.750600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:33.264 [2024-11-04 12:35:07.751869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.264 [2024-11-04 12:35:07.751888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.264 [2024-11-04 12:35:07.807768] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:33.264 [2024-11-04 12:35:07.808299] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:33.264 [2024-11-04 12:35:07.808632] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
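The NOTICE lines above show the target up with two reactors and every spdk_thread in interrupt mode, which is what the --interrupt-mode and -m 0x3 flags request. A minimal sketch of the launch-and-wait step, assuming the workspace path seen in this log; the polling loop is only a stand-in for the suite's waitforlisten helper, not its actual implementation:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # Poll the RPC socket until the app answers (stand-in for waitforlisten).
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done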
00:30:34.206 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:34.206 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:34.207 [2024-11-04 12:35:08.476485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:34.207 [2024-11-04 12:35:08.505139] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:34.207 NULL1 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.207 12:35:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:34.207 Delay0 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1851822 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:34.207 12:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:34.207 [2024-11-04 12:35:08.595090] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
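Stripped of the xtrace noise, the setup above is a short RPC recipe. A condensed sketch using scripts/rpc.py, the client that the harness's rpc_cmd helper wraps (command names and arguments are verbatim from the trace; only the direct rpc.py invocation is assumed):

  # TCP transport (-o and -u 8192 copied as-is from the trace), a subsystem capped
  # at 10 connections, a listener on 10.0.0.2:4420, and a 1000 MB null bdev with
  # 512-byte blocks wrapped in a delay bdev injecting 1,000,000 us (1 s) per I/O
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The 1 s delay guarantees that much of the 128-deep spdk_nvme_perf workload is still in flight when nvmf_delete_subsystem fires two seconds in, which is the point of the test. The walls of "completed with error (sct=0, sc=8)" below are therefore the expected outcome: sct=0/sc=0x8 is the NVMe generic status "Command Aborted due to SQ Deletion", one per command whose queue pair the deletion tore down, and the interspersed "starting I/O failed: -6" lines are submissions rejected outright (likely -ENXIO) once the queues were gone.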
00:30:36.121 12:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:36.121 12:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.121 12:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:36.382 Write completed with error (sct=0, sc=8) 00:30:36.382 Read completed with error (sct=0, sc=8) 00:30:36.382 Read completed with error (sct=0, sc=8) 00:30:36.382 Read completed with error (sct=0, sc=8) 00:30:36.382 starting I/O failed: -6 00:30:36.382 Read completed with error (sct=0, sc=8) 00:30:36.382 Read completed with error (sct=0, sc=8) 00:30:36.382 Read completed with error (sct=0, sc=8) 00:30:36.382 Read completed with error (sct=0, sc=8) 00:30:36.382 starting I/O failed: -6 00:30:36.382 Read completed with error (sct=0, sc=8) 00:30:36.382 Write completed with error (sct=0, sc=8) 00:30:36.382 Read completed with error (sct=0, sc=8) 00:30:36.382 Read completed with error (sct=0, sc=8) 00:30:36.382 starting I/O failed: -6 00:30:36.382 Write completed with error (sct=0, sc=8) 00:30:36.382 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 starting I/O failed: -6 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 
00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 [2024-11-04 12:35:10.881732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf3390 is same with the state(6) to be set 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read 
completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 starting I/O failed: -6 00:30:36.383 [2024-11-04 12:35:10.886024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4be8000c00 is same with the state(6) to be set 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 
00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Write completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:36.383 Read completed with error (sct=0, sc=8) 00:30:37.327 [2024-11-04 12:35:11.859706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4a70 is same with the state(6) to be set 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 [2024-11-04 12:35:11.885022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf3930 is same with the state(6) to be set 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Write 
completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 [2024-11-04 12:35:11.885220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf3570 is same with the state(6) to be set 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 [2024-11-04 12:35:11.888191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4be800cfe0 is same with the state(6) to be set 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 Write completed with error (sct=0, sc=8) 00:30:37.327 Read completed with error (sct=0, sc=8) 00:30:37.327 [2024-11-04 12:35:11.888578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4be800d780 is same with the state(6) to be set 00:30:37.327 Initializing NVMe Controllers 00:30:37.327 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1
00:30:37.327 Controller IO queue size 128, less than required.
00:30:37.327 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:37.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:30:37.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:30:37.327 Initialization complete. Launching workers.
00:30:37.327 ========================================================
00:30:37.327 Latency(us)
00:30:37.327 Device Information : IOPS MiB/s Average min max
00:30:37.327 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.28 0.08 899890.37 262.14 1007102.82
00:30:37.327 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.30 0.08 907754.46 330.04 1011405.24
00:30:37.327 ========================================================
00:30:37.327 Total : 332.58 0.16 903775.32 262.14 1011405.24
00:30:37.327
00:30:37.327 [2024-11-04 12:35:11.889206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf4a70 (9): Bad file descriptor
00:30:37.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:30:37.327 12:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:37.327 12:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:30:37.327 12:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1851822
00:30:37.327 12:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1851822
00:30:37.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1851822) - No such process
00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1851822
00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1851822
00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1851822
00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:30:37.899 12:35:12
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:37.899 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.900 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:37.900 [2024-11-04 12:35:12.424885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:37.900 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.900 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.900 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.900 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:37.900 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.900 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1852496 00:30:37.900 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:37.900 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:37.900 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852496 00:30:37.900 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:38.161 [2024-11-04 12:35:12.489391] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
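The second pass rebuilds the subsystem, starts a shorter 3-second perf run (PID 1852496), and again polls the process while the subsystem is deleted out from under it. From the script lines firing in the trace (@56 delay=0, @57 kill -0, @58 sleep 0.5, @60 the bound check), the wait loop reads back roughly as below -- a reconstruction from the xtrace, not a copy of delete_subsystem.sh, and the failure path is illustrative:

  delay=0
  while kill -0 "$perf_pid" 2> /dev/null; do   # line 57: perf still alive?
      sleep 0.5                                # line 58
      (( delay++ > 20 )) && exit 1             # line 60: ~10 s budget, then fail
  done

The perf summary that follows shows averages just above 1,000,000 us with minima around 1,000,165 us: every I/O pays the full 1 s injected by Delay0 plus a few milliseconds of real transport and queueing time.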
00:30:38.422 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:38.422 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852496 00:30:38.422 12:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:38.993 12:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:38.993 12:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852496 00:30:38.993 12:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:39.565 12:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:39.565 12:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852496 00:30:39.565 12:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:40.137 12:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:40.137 12:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852496 00:30:40.137 12:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:40.708 12:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:40.708 12:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852496 00:30:40.708 12:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:40.969 12:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:40.969 12:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852496 00:30:40.969 12:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:41.230 Initializing NVMe Controllers 00:30:41.230 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:41.230 Controller IO queue size 128, less than required. 00:30:41.230 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:41.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:41.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:41.230 Initialization complete. Launching workers. 
00:30:41.230 ========================================================
00:30:41.230 Latency(us)
00:30:41.230 Device Information : IOPS MiB/s Average min max
00:30:41.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002290.98 1000165.35 1005526.63
00:30:41.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003984.40 1000338.48 1010101.98
00:30:41.230 ========================================================
00:30:41.230 Total : 256.00 0.12 1003137.69 1000165.35 1010101.98
00:30:41.230
00:30:41.492 12:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:41.492 12:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1852496
00:30:41.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1852496) - No such process
00:30:41.492 12:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1852496
00:30:41.492 12:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:30:41.492 12:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:30:41.492 12:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:30:41.492 12:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:30:41.492 12:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:41.492 12:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:30:41.492 12:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:41.492 12:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:41.492 rmmod nvme_tcp
00:30:41.492 rmmod nvme_fabrics
00:30:41.492 rmmod nvme_keyring
00:30:41.492 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:41.492 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:30:41.492 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:30:41.492 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1851644 ']'
00:30:41.492 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1851644
00:30:41.492 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1851644 ']'
00:30:41.492 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1851644
00:30:41.492 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1851644 00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1851644' 00:30:41.754 killing process with pid 1851644 00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1851644 00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1851644 00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.754 12:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.302 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:44.303 00:30:44.303 real 0m18.159s 00:30:44.303 user 0m26.555s 00:30:44.303 sys 0m7.359s 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:44.303 ************************************ 00:30:44.303 END TEST nvmf_delete_subsystem 00:30:44.303 ************************************ 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:44.303 ************************************ 00:30:44.303 START TEST nvmf_host_management 00:30:44.303 ************************************ 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:44.303 * Looking for test storage... 00:30:44.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:44.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.303 --rc genhtml_branch_coverage=1 00:30:44.303 --rc genhtml_function_coverage=1 00:30:44.303 --rc genhtml_legend=1 00:30:44.303 --rc geninfo_all_blocks=1 00:30:44.303 --rc geninfo_unexecuted_blocks=1 00:30:44.303 00:30:44.303 ' 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:44.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.303 --rc genhtml_branch_coverage=1 00:30:44.303 --rc genhtml_function_coverage=1 00:30:44.303 --rc genhtml_legend=1 00:30:44.303 --rc geninfo_all_blocks=1 00:30:44.303 --rc geninfo_unexecuted_blocks=1 00:30:44.303 00:30:44.303 ' 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:44.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.303 --rc genhtml_branch_coverage=1 00:30:44.303 --rc genhtml_function_coverage=1 00:30:44.303 --rc genhtml_legend=1 00:30:44.303 --rc geninfo_all_blocks=1 00:30:44.303 --rc geninfo_unexecuted_blocks=1 00:30:44.303 00:30:44.303 ' 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:44.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.303 --rc genhtml_branch_coverage=1 00:30:44.303 --rc genhtml_function_coverage=1 00:30:44.303 --rc genhtml_legend=1 
00:30:44.303 --rc geninfo_all_blocks=1 00:30:44.303 --rc geninfo_unexecuted_blocks=1 00:30:44.303 00:30:44.303 ' 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.303 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.304 12:35:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:44.304 12:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:52.460 12:35:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:52.460 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:52.460 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.460 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
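The device probe traced above buckets the machine's NICs into per-family arrays keyed by PCI vendor:device ID and keeps only the family selected by SPDK_TEST_NVMF_NICS (e810 in this job). A minimal bash sketch of that idea, reading IDs straight from sysfs; the script's pci_bus_cache indirection is omitted and the Mellanox ID list is collapsed to a wildcard here:

# Sketch: classify NICs by PCI vendor:device ID, as nvmf/common.sh does above.
e810=() x722=() mlx=()
for dev in /sys/bus/pci/devices/*; do
    id="$(cat "$dev/vendor"):$(cat "$dev/device")"
    case "$id" in
        0x8086:0x1592 | 0x8086:0x159b) e810+=("${dev##*/}") ;;  # Intel E810, the two ports found above
        0x8086:0x37d2)                 x722+=("${dev##*/}") ;;  # Intel X722
        0x15b3:*)                      mlx+=("${dev##*/}")  ;;  # Mellanox; the real script matches specific IDs (0x1017, 0x101d, ...)
    esac
done
pci_devs=("${e810[@]}")
(( ${#pci_devs[@]} == 0 )) && { echo "no e810 NICs found" >&2; exit 1; }   # the (( 2 == 0 )) guard above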
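The per-port loop that follows maps each matched PCI function to its kernel netdev by globbing sysfs and stripping the directory prefix, which is how 0000:4b:00.0 resolves to cvl_0_0. The idiom in isolation, with names taken from this run:

pci=0000:4b:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:4b:00.0/net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
echo "Found net devices under $pci: ${pci_net_devs[*]}"
net_devs+=("${pci_net_devs[@]}")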
00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:52.461 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:52.461 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:52.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:52.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:30:52.461 00:30:52.461 --- 10.0.0.2 ping statistics --- 00:30:52.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.461 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:52.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:52.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:30:52.461 00:30:52.461 --- 10.0.0.1 ping statistics --- 00:30:52.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.461 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1857485 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1857485 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1857485 ']' 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:52.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:52.461 12:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:52.461 [2024-11-04 12:35:26.019684] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:52.461 [2024-11-04 12:35:26.020855] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:30:52.461 [2024-11-04 12:35:26.020905] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:52.461 [2024-11-04 12:35:26.109565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:52.461 [2024-11-04 12:35:26.163167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:52.461 [2024-11-04 12:35:26.163219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:52.461 [2024-11-04 12:35:26.163228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:52.461 [2024-11-04 12:35:26.163236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:52.461 [2024-11-04 12:35:26.163242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:52.461 [2024-11-04 12:35:26.165185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:52.461 [2024-11-04 12:35:26.165354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:52.461 [2024-11-04 12:35:26.165481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.461 [2024-11-04 12:35:26.165482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:52.461 [2024-11-04 12:35:26.239805] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:52.461 [2024-11-04 12:35:26.240410] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:52.461 [2024-11-04 12:35:26.241433] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:52.461 [2024-11-04 12:35:26.241520] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:52.461 [2024-11-04 12:35:26.241690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
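The nvmf_tcp_init sequence traced above turns the two looped-back E810 ports into a target/initiator pair: the target port is moved into a private network namespace with 10.0.0.2, the initiator port stays in the root namespace with 10.0.0.1, an iptables rule opens TCP/4420, and a ping in each direction proves the path. The same steps collected into one block, interface names as discovered in this run:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                      # root ns -> target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # namespace -> initiator port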
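waitforlisten then blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock. The real helper in autotest_common.sh retries an actual RPC with the max_retries=100 budget visible in the trace; a simplified stand-in that only waits for the socket to appear:

# Simplified stand-in for waitforlisten (socket path and retry budget from the trace above).
sock=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    [[ -S $sock ]] && break
    sleep 0.1
done
[[ -S $sock ]] || { echo "nvmf_tgt never came up on $sock" >&2; exit 1; }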
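The -m 0x1E core mask passed to nvmfappstart explains the four reactors on cores 1-4 in the notices above: 0x1E is binary 11110, so core 0 stays free for the bdevperf initiator started later with -c 0x1. Decoding such a mask:

# 0x1E = 0b11110: reactors on cores 1-4, core 0 left for the initiator (-c 0x1).
mask=0x1E
for i in {0..7}; do (( (mask >> i) & 1 )) && echo "reactor on core $i"; done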
00:30:52.461 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:52.461 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:30:52.461 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:52.462 [2024-11-04 12:35:26.874338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:52.462 Malloc0 00:30:52.462 [2024-11-04 12:35:26.966558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:52.462 12:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:52.462 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1857709 00:30:52.462 12:35:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1857709 /var/tmp/bdevperf.sock 00:30:52.462 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1857709 ']' 00:30:52.462 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:52.462 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:52.462 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:52.462 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:52.462 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:52.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:52.462 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:52.462 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:30:52.462 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:52.462 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:30:52.462 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:52.462 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:52.462 { 00:30:52.462 "params": { 00:30:52.462 "name": "Nvme$subsystem", 00:30:52.462 "trtype": "$TEST_TRANSPORT", 00:30:52.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:52.462 "adrfam": "ipv4", 00:30:52.462 "trsvcid": "$NVMF_PORT", 00:30:52.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:52.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:52.462 "hdgst": ${hdgst:-false}, 00:30:52.462 "ddgst": ${ddgst:-false} 00:30:52.462 }, 00:30:52.462 "method": "bdev_nvme_attach_controller" 00:30:52.462 } 00:30:52.462 EOF 00:30:52.462 )") 00:30:52.749 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:30:52.749 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
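The cat of rpcs.txt at host_management.sh@23 above is not echoed, so the exact subsystem setup is elided; only its effects are visible (the tcp transport, a Malloc0 bdev, a listener on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode0, and host0 on the allow list). A rough equivalent with SPDK's stock scripts/rpc.py, not the script's literal contents:

# Reconstruction of the elided rpcs.txt; Malloc0's size/block size are assumptions,
# everything else mirrors what the log shows.
rpc=./scripts/rpc.py          # defaults to the target's /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0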
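gen_nvmf_target_json, traced at nvmf/common.sh@558-584 above, is a compact pattern worth noting: each subsystem's JSON fragment comes from command-substituting a heredoc (so $NVMF_FIRST_TARGET_IP and friends expand in place), the fragments are comma-joined under IFS=',', validated with jq, and handed to bdevperf through process substitution, which is why the log shows --json /dev/fd/63. A sketch of the pattern, trimmed to the fields printed below and assuming NVMF_FIRST_TARGET_IP=10.0.0.2 as set during nvmf_tcp_init:

gen_bdevperf_json() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # comma-join and validate/pretty-print; with the single subsystem used here the join is a no-op
    jq . <<< "$(IFS=,; printf '%s\n' "${config[*]}")"
}
# the JSON never touches disk: the process substitution is what appears as --json /dev/fd/63
./build/examples/bdevperf --json <(gen_bdevperf_json 0) -q 64 -o 65536 -w verify -t 10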
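bdevperf is then polled rather than trusted: host_management.sh's waitforio loop (the @52-62 trace below) reads bdev_get_iostat over the bdevperf RPC socket until num_read_ops clears 100, which in this run happens on the second pass (67, then 532). The loop as reconstructed from the trace; rpc_cmd is the harness wrapper around rpc.py:

waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
waitforio /var/tmp/bdevperf.sock Nvme0n1   # 67 on the first pass, 532 on the second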
00:30:52.749 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:30:52.749 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:52.749 "params": { 00:30:52.749 "name": "Nvme0", 00:30:52.749 "trtype": "tcp", 00:30:52.749 "traddr": "10.0.0.2", 00:30:52.749 "adrfam": "ipv4", 00:30:52.749 "trsvcid": "4420", 00:30:52.749 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:52.749 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:52.749 "hdgst": false, 00:30:52.749 "ddgst": false 00:30:52.749 }, 00:30:52.749 "method": "bdev_nvme_attach_controller" 00:30:52.749 }' 00:30:52.749 [2024-11-04 12:35:27.071407] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:30:52.749 [2024-11-04 12:35:27.071465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857709 ] 00:30:52.749 [2024-11-04 12:35:27.131775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.749 [2024-11-04 12:35:27.168440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.033 Running I/O for 10 seconds... 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:30:53.033 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:30:53.351 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:30:53.352 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:53.352 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:53.352 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:53.352 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.352 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:53.352 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.352 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=532 00:30:53.352 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 532 -ge 100 ']' 00:30:53.352 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:53.352 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:53.352 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:53.352 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:53.352 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.352 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:53.352 [2024-11-04 12:35:27.769974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05360 is same with the state(6) to be set 00:30:53.352 [2024-11-04 12:35:27.770015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05360 is same with the state(6) to be set 00:30:53.352 [2024-11-04 12:35:27.770025] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb05360 is same with the state(6) to be set 00:30:53.352 [... the same tcp.c:1773 recv-state ERROR repeated, timestamps 12:35:27.770034 through 12:35:27.770097 ...] 00:30:53.352 [2024-11-04 12:35:27.772440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.352 [2024-11-04 12:35:27.772477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.352 [... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for admin cid:1-3, timestamps 12:35:27.772487 through 12:35:27.772527 ...] 00:30:53.352 [2024-11-04 12:35:27.772534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234d0c0 is same with the state(6) to be set 00:30:53.352 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.352 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:53.352 12:35:27
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.352 [2024-11-04 12:35:27.775906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.352 [2024-11-04 12:35:27.775929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.352 [2024-11-04 12:35:27.775944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.352 [2024-11-04 12:35:27.775952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.352 [2024-11-04 12:35:27.775962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.352 [2024-11-04 12:35:27.775970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.352 [2024-11-04 12:35:27.775979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.352 [2024-11-04 12:35:27.775987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.352 [2024-11-04 12:35:27.775996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.352 [2024-11-04 12:35:27.776003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.352 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:53.352 [2024-11-04 12:35:27.776013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.352 [2024-11-04 12:35:27.776026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.352 [2024-11-04 12:35:27.776036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.352 [2024-11-04 12:35:27.776044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.352 [2024-11-04 12:35:27.776053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.352 [2024-11-04 12:35:27.776061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.352 [2024-11-04 12:35:27.776070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.352 [2024-11-04 12:35:27.776078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.352 [2024-11-04 12:35:27.776087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.352 [2024-11-04 12:35:27.776095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.352 [... the same nvme_io_qpair_print_command / ABORTED - SQ DELETION print_completion pair repeated for every remaining in-flight I/O on qid:1: WRITE cid:27-58 (lba:85376-89344) and READ cid:1-16 (lba:82048-83968) ...] 00:30:53.354 [2024-11-04 12:35:27.776940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.354 [2024-11-04
12:35:27.776948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.354 [2024-11-04 12:35:27.776958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.354 [2024-11-04 12:35:27.776965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.354 [2024-11-04 12:35:27.776974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.354 [2024-11-04 12:35:27.776982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.354 [2024-11-04 12:35:27.776992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.354 [2024-11-04 12:35:27.776999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.354 [2024-11-04 12:35:27.777009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.354 [2024-11-04 12:35:27.777016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.354 [2024-11-04 12:35:27.777069] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2566370 was disconnected and freed. reset controller. 00:30:53.354 [2024-11-04 12:35:27.778281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:53.354 task offset: 84096 on job bdev=Nvme0n1 fails 00:30:53.354 00:30:53.354 Latency(us) 00:30:53.354 [2024-11-04T11:35:27.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.354 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:53.354 Job: Nvme0n1 ended in about 0.43 seconds with error 00:30:53.354 Verification LBA range: start 0x0 length 0x400 00:30:53.354 Nvme0n1 : 0.43 1502.87 93.93 150.29 0.00 37569.28 1611.09 38010.88 00:30:53.354 [2024-11-04T11:35:27.924Z] =================================================================================================================== 00:30:53.354 [2024-11-04T11:35:27.924Z] Total : 1502.87 93.93 150.29 0.00 37569.28 1611.09 38010.88 00:30:53.354 [2024-11-04 12:35:27.780267] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:53.354 [2024-11-04 12:35:27.780289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234d0c0 (9): Bad file descriptor 00:30:53.354 [2024-11-04 12:35:27.781608] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:30:53.354 [2024-11-04 12:35:27.781684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:53.354 [2024-11-04 12:35:27.781714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.354 [2024-11-04 12:35:27.781731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:30:53.354 [2024-11-04 12:35:27.781740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:30:53.354 [2024-11-04 12:35:27.781755] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.354 [2024-11-04 12:35:27.781763] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x234d0c0 00:30:53.354 [2024-11-04 12:35:27.781783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234d0c0 (9): Bad file descriptor 00:30:53.354 [2024-11-04 12:35:27.781797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:53.354 [2024-11-04 12:35:27.781804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:53.355 [2024-11-04 12:35:27.781813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:53.355 [2024-11-04 12:35:27.781826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:53.355 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.355 12:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:54.339 12:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1857709 00:30:54.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1857709) - No such process 00:30:54.339 12:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:54.339 12:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:54.339 12:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:54.339 12:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:54.339 12:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:30:54.339 12:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:30:54.339 12:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:54.339 12:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:54.339 { 00:30:54.339 "params": { 00:30:54.339 "name": "Nvme$subsystem", 00:30:54.339 "trtype": "$TEST_TRANSPORT", 00:30:54.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:54.339 "adrfam": "ipv4", 00:30:54.339 "trsvcid": "$NVMF_PORT", 00:30:54.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:54.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:54.339 "hdgst": ${hdgst:-false}, 00:30:54.339 "ddgst": ${ddgst:-false} 00:30:54.339 }, 00:30:54.339 "method": 
"bdev_nvme_attach_controller" 00:30:54.339 } 00:30:54.339 EOF 00:30:54.339 )") 00:30:54.339 12:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:30:54.339 12:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:30:54.339 12:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:30:54.339 12:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:54.339 "params": { 00:30:54.339 "name": "Nvme0", 00:30:54.339 "trtype": "tcp", 00:30:54.339 "traddr": "10.0.0.2", 00:30:54.339 "adrfam": "ipv4", 00:30:54.339 "trsvcid": "4420", 00:30:54.339 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:54.339 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:54.339 "hdgst": false, 00:30:54.339 "ddgst": false 00:30:54.339 }, 00:30:54.339 "method": "bdev_nvme_attach_controller" 00:30:54.339 }' 00:30:54.339 [2024-11-04 12:35:28.858354] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:30:54.339 [2024-11-04 12:35:28.858410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858065 ] 00:30:54.600 [2024-11-04 12:35:28.918800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.600 [2024-11-04 12:35:28.955482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.860 Running I/O for 1 seconds... 00:30:55.801 1664.00 IOPS, 104.00 MiB/s 00:30:55.801 Latency(us) 00:30:55.801 [2024-11-04T11:35:30.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:55.801 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:55.801 Verification LBA range: start 0x0 length 0x400 00:30:55.801 Nvme0n1 : 1.03 1685.70 105.36 0.00 0.00 37301.89 6144.00 34078.72 00:30:55.801 [2024-11-04T11:35:30.371Z] =================================================================================================================== 00:30:55.801 [2024-11-04T11:35:30.371Z] Total : 1685.70 105.36 0.00 0.00 37301.89 6144.00 34078.72 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:56.062 12:35:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:56.062 rmmod nvme_tcp 00:30:56.062 rmmod nvme_fabrics 00:30:56.062 rmmod nvme_keyring 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1857485 ']' 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1857485 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1857485 ']' 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1857485 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1857485 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1857485' 00:30:56.062 killing process with pid 1857485 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1857485 00:30:56.062 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1857485 00:30:56.323 [2024-11-04 12:35:30.650734] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:56.323 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:56.323 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:56.323 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:56.323 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:56.323 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:30:56.323 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:30:56.323 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- 
# grep -v SPDK_NVMF 00:30:56.323 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:56.323 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:56.323 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.323 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.323 12:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.237 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:58.237 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:58.237 00:30:58.237 real 0m14.345s 00:30:58.237 user 0m18.556s 00:30:58.237 sys 0m7.262s 00:30:58.237 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:58.237 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:58.237 ************************************ 00:30:58.237 END TEST nvmf_host_management 00:30:58.237 ************************************ 00:30:58.237 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:58.237 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:58.237 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:58.237 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:58.499 ************************************ 00:30:58.499 START TEST nvmf_lvol 00:30:58.499 ************************************ 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:58.499 * Looking for test storage... 
00:30:58.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:58.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.499 --rc genhtml_branch_coverage=1 00:30:58.499 --rc genhtml_function_coverage=1 00:30:58.499 --rc genhtml_legend=1 00:30:58.499 --rc geninfo_all_blocks=1 00:30:58.499 --rc geninfo_unexecuted_blocks=1 00:30:58.499 00:30:58.499 ' 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:58.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.499 --rc genhtml_branch_coverage=1 00:30:58.499 --rc genhtml_function_coverage=1 00:30:58.499 --rc genhtml_legend=1 00:30:58.499 --rc geninfo_all_blocks=1 00:30:58.499 --rc geninfo_unexecuted_blocks=1 00:30:58.499 00:30:58.499 ' 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:58.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.499 --rc genhtml_branch_coverage=1 00:30:58.499 --rc genhtml_function_coverage=1 00:30:58.499 --rc genhtml_legend=1 00:30:58.499 --rc geninfo_all_blocks=1 00:30:58.499 --rc geninfo_unexecuted_blocks=1 00:30:58.499 00:30:58.499 ' 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:58.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.499 --rc genhtml_branch_coverage=1 00:30:58.499 --rc genhtml_function_coverage=1 00:30:58.499 --rc genhtml_legend=1 00:30:58.499 --rc geninfo_all_blocks=1 00:30:58.499 --rc geninfo_unexecuted_blocks=1 00:30:58.499 00:30:58.499 ' 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:58.499 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:58.500 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:58.500 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:58.500 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:58.500 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:58.500 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:58.500 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:58.500 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:58.500 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:58.500 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:58.500 12:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:58.500 12:35:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:58.500 12:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:05.088 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:05.089 12:35:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:05.089 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:05.089 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:05.089 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:05.089 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:05.089 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:05.089 
12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:05.350 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:05.350 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:05.350 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:05.350 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:05.350 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:05.350 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:05.350 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:05.350 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:05.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:05.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:31:05.610 00:31:05.610 --- 10.0.0.2 ping statistics --- 00:31:05.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.610 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:31:05.610 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:05.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:05.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:31:05.610 00:31:05.610 --- 10.0.0.1 ping statistics --- 00:31:05.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.610 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:31:05.610 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:05.610 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1862535 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1862535 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1862535 ']' 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:05.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:05.611 12:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:05.611 [2024-11-04 12:35:40.036842] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
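The namespace plumbing traced above is the crux of the phy-NIC TCP setup: one port of the NIC pair (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, so host and target traffic crosses the physical link rather than loopback, and both directions are verified with a ping before the target starts. Condensed into a sketch (interface names and addresses are specific to this rig):

    # Condensed from the trace above; cvl_0_0/cvl_0_1 and 10.0.0.x are rig-specific.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port; the SPDK_NVMF comment tag is what lets the later
    # cleanup strip the rule via: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
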
00:31:05.611 [2024-11-04 12:35:40.038023] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:31:05.611 [2024-11-04 12:35:40.038078] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:05.611 [2024-11-04 12:35:40.110323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:05.611 [2024-11-04 12:35:40.155355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:05.611 [2024-11-04 12:35:40.155396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:05.611 [2024-11-04 12:35:40.155405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:05.611 [2024-11-04 12:35:40.155411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:05.611 [2024-11-04 12:35:40.155417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:05.611 [2024-11-04 12:35:40.156937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.611 [2024-11-04 12:35:40.157048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:05.611 [2024-11-04 12:35:40.157050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.871 [2024-11-04 12:35:40.214863] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:05.871 [2024-11-04 12:35:40.215486] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:05.871 [2024-11-04 12:35:40.215688] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:05.871 [2024-11-04 12:35:40.215966] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:06.442 12:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:06.442 12:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:31:06.442 12:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:06.442 12:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:06.442 12:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:06.442 12:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:06.442 12:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:06.442 [2024-11-04 12:35:41.009957] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:06.704 12:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:06.704 12:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:06.704 12:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:06.965 12:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:06.965 12:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:07.226 12:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:07.226 12:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=56b73bab-d3cd-40f4-be74-bffdde85aadf 00:31:07.226 12:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 56b73bab-d3cd-40f4-be74-bffdde85aadf lvol 20 00:31:07.486 12:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=cc84832f-6369-45c0-94c8-2c05aadb1183 00:31:07.487 12:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:07.747 12:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cc84832f-6369-45c0-94c8-2c05aadb1183 00:31:07.747 12:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:08.008 [2024-11-04 12:35:42.405769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:08.008 12:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:08.269 12:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1862933 00:31:08.269 12:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:08.269 12:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:09.212 12:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot cc84832f-6369-45c0-94c8-2c05aadb1183 MY_SNAPSHOT 00:31:09.472 12:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2713e518-52af-45c7-bef3-cb3ad57eadfe 00:31:09.472 12:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize cc84832f-6369-45c0-94c8-2c05aadb1183 30 00:31:09.733 12:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2713e518-52af-45c7-bef3-cb3ad57eadfe MY_CLONE 00:31:09.733 12:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ebb6cc39-dcf1-4a99-ba63-4a12adb01314 00:31:09.733 12:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ebb6cc39-dcf1-4a99-ba63-4a12adb01314 00:31:10.304 12:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1862933 00:31:20.301 Initializing NVMe Controllers 00:31:20.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:20.301 Controller IO queue size 128, less than required. 00:31:20.301 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:20.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:20.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:20.301 Initialization complete. Launching workers. 
00:31:20.301 ========================================================
00:31:20.301 Latency(us)
00:31:20.302 Device Information : IOPS MiB/s Average min max
00:31:20.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12351.20 48.25 10367.28 1653.75 53850.61
00:31:20.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16011.20 62.54 7996.09 1219.78 56939.33
00:31:20.302 ========================================================
00:31:20.302 Total : 28362.40 110.79 9028.69 1219.78 56939.33
00:31:20.302
00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cc84832f-6369-45c0-94c8-2c05aadb1183 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 56b73bab-d3cd-40f4-be74-bffdde85aadf 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:20.302 rmmod nvme_tcp 00:31:20.302 rmmod nvme_fabrics 00:31:20.302 rmmod nvme_keyring 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1862535 ']' 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1862535 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1862535 ']' 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1862535 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1862535 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1862535' 00:31:20.302 killing process with pid 1862535 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1862535 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1862535 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.302 12:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.686 12:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:21.686 00:31:21.686 real 0m23.132s 00:31:21.686 user 0m55.461s 00:31:21.686 sys 0m10.371s 00:31:21.686 12:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:21.686 12:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:21.686 ************************************ 00:31:21.686 END TEST nvmf_lvol 00:31:21.686 ************************************ 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:21.686 ************************************ 00:31:21.686 START TEST nvmf_lvs_grow 00:31:21.686 
************************************ 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:21.686 * Looking for test storage... 00:31:21.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:21.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.686 --rc genhtml_branch_coverage=1 00:31:21.686 --rc genhtml_function_coverage=1 00:31:21.686 --rc genhtml_legend=1 00:31:21.686 --rc geninfo_all_blocks=1 00:31:21.686 --rc geninfo_unexecuted_blocks=1 00:31:21.686 00:31:21.686 ' 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:21.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.686 --rc genhtml_branch_coverage=1 00:31:21.686 --rc genhtml_function_coverage=1 00:31:21.686 --rc genhtml_legend=1 00:31:21.686 --rc geninfo_all_blocks=1 00:31:21.686 --rc geninfo_unexecuted_blocks=1 00:31:21.686 00:31:21.686 ' 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:21.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.686 --rc genhtml_branch_coverage=1 00:31:21.686 --rc genhtml_function_coverage=1 00:31:21.686 --rc genhtml_legend=1 00:31:21.686 --rc geninfo_all_blocks=1 00:31:21.686 --rc geninfo_unexecuted_blocks=1 00:31:21.686 00:31:21.686 ' 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:21.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.686 --rc genhtml_branch_coverage=1 00:31:21.686 --rc genhtml_function_coverage=1 00:31:21.686 --rc genhtml_legend=1 00:31:21.686 --rc geninfo_all_blocks=1 00:31:21.686 --rc geninfo_unexecuted_blocks=1 00:31:21.686 00:31:21.686 ' 00:31:21.686 12:35:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:21.686 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:21.947 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:21.947 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:21.947 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:21.947 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:21.947 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:21.947 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:21.947 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:21.947 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:21.947 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:21.947 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.947 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.947 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.947 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.947 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
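
Two things worth noting in the common.sh setup above: the heavily duplicated PATH strings are what repeated sourcing of paths/export.sh produces (each pass prepends the Go/protoc/golangci directories again), and the hostnqn/hostid pair comes straight from nvme gen-hostnqn. The NVMF_APP lines around this point show the harness building the target's command line as a bash array so flags can be appended conditionally. A minimal sketch of the same pattern (the binary path and the interrupt_mode variable are illustrative stand-ins, not the script's actual names):

    NVMF_APP=(./build/bin/nvmf_tgt)               # illustrative path to the target binary
    NVMF_APP_SHM_ID=0                             # this run also uses shm id 0
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id, tracepoint group mask
    interrupt_mode=1                              # stands in for the '[' 1 -eq 1 ']' test above
    (( interrupt_mode )) && NVMF_APP+=(--interrupt-mode)
    echo "${NVMF_APP[@]}"                         # array expansion keeps each flag word-safe
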
00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:21.948 12:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:30.099 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:30.099 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:30.099 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:30.099 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:30.099 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:30.099 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:30.099 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:30.099 12:36:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:30.099 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:30.099 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:30.099 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:30.099 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:30.099 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:30.099 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:30.099 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:30.099 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:30.099 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:30.099 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
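
The xtrace above is gather_supported_nvmf_pci_devs classifying NICs into per-family arrays (e810, x722, mlx) by PCI vendor:device ID before picking the test interfaces. A rough standalone equivalent for the E810 IDs matched here (vendor 0x8086, device 0x159b/0x1592) — the real script works from a prebuilt pci_bus_cache rather than scanning sysfs directly:

    # Collect Intel E810 PCI functions and list their net interfaces from sysfs.
    e810=()
    for dev in /sys/bus/pci/devices/*; do
        ven=$(<"$dev/vendor")
        did=$(<"$dev/device")
        if [[ $ven == 0x8086 && ( $did == 0x159b || $did == 0x1592 ) ]]; then
            e810+=("${dev##*/}")
        fi
    done
    for pci in "${e810[@]}"; do
        for net in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"
        done
    done
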
00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:30.100 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:30.100 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:30.100 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:30.100 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:30.100 12:36:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:30.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:30.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:31:30.100 00:31:30.100 --- 10.0.0.2 ping statistics --- 00:31:30.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.100 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:30.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:30.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:31:30.100 00:31:30.100 --- 10.0.0.1 ping statistics --- 00:31:30.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.100 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:30.100 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:30.101 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:30.101 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:30.101 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:30.101 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:30.101 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:30.101 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1869367 00:31:30.101 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1869367 00:31:30.101 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:30.101 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1869367 ']' 00:31:30.101 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.101 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:30.101 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.101 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:30.101 12:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:30.101 [2024-11-04 12:36:03.698678] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
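
At this point the TCP fixture is fully wired up: nvmf_tcp_init moved the first E810 port (cvl_0_0) into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, left the second port (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, opened port 4420 in iptables, and the two pings above verified each direction; the nvmf_tgt whose startup notices begin here runs inside that namespace (ip netns exec ... --interrupt-mode -m 0x1). Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator ns
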
00:31:30.101 [2024-11-04 12:36:03.699832] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:31:30.101 [2024-11-04 12:36:03.699885] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:30.101 [2024-11-04 12:36:03.770336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.101 [2024-11-04 12:36:03.811979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:30.101 [2024-11-04 12:36:03.812015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:30.101 [2024-11-04 12:36:03.812023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:30.101 [2024-11-04 12:36:03.812030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:30.101 [2024-11-04 12:36:03.812036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:30.101 [2024-11-04 12:36:03.812639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.101 [2024-11-04 12:36:03.868512] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:30.101 [2024-11-04 12:36:03.868778] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:30.101 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:30.101 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:31:30.101 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:30.101 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:30.101 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:30.101 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.101 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:30.362 [2024-11-04 12:36:04.717434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.362 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:30.362 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:30.362 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:30.362 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:30.362 ************************************ 00:31:30.362 START TEST lvs_grow_clean 00:31:30.362 ************************************ 00:31:30.362 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:31:30.362 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:30.362 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:30.362 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:30.362 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:30.362 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:30.363 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:30.363 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:30.363 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:30.363 12:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:30.623 12:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:30.623 12:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:30.623 12:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f734fb63-b583-451e-a865-9755b127dd5d 00:31:30.884 12:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f734fb63-b583-451e-a865-9755b127dd5d 00:31:30.884 12:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:30.884 12:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:30.884 12:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:30.884 12:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f734fb63-b583-451e-a865-9755b127dd5d lvol 150 00:31:31.145 12:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5a746cae-9827-42ae-9322-a07309547f8c 00:31:31.145 12:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:31.145 12:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:31.145 [2024-11-04 12:36:05.669016] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:31.145 [2024-11-04 12:36:05.669098] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:31.145 true 00:31:31.145 12:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:31.145 12:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f734fb63-b583-451e-a865-9755b127dd5d 00:31:31.406 12:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:31.406 12:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:31.667 12:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5a746cae-9827-42ae-9322-a07309547f8c 00:31:31.928 12:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:31.928 [2024-11-04 12:36:06.401434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:31.928 12:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:32.189 12:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1869929 00:31:32.189 12:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:32.189 12:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:32.189 12:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1869929 /var/tmp/bdevperf.sock 00:31:32.189 12:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1869929 ']' 00:31:32.189 12:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:32.189 12:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:32.189 12:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:32.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:32.189 12:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:32.189 12:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:32.189 [2024-11-04 12:36:06.656384] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:31:32.189 [2024-11-04 12:36:06.656461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1869929 ] 00:31:32.189 [2024-11-04 12:36:06.737448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.450 [2024-11-04 12:36:06.790219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:33.021 12:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:33.021 12:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:31:33.021 12:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:33.282 Nvme0n1 00:31:33.282 12:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:33.543 [ 00:31:33.543 { 00:31:33.543 "name": "Nvme0n1", 00:31:33.543 "aliases": [ 00:31:33.543 "5a746cae-9827-42ae-9322-a07309547f8c" 00:31:33.543 ], 00:31:33.543 "product_name": "NVMe disk", 00:31:33.543 "block_size": 4096, 00:31:33.543 "num_blocks": 38912, 00:31:33.543 "uuid": "5a746cae-9827-42ae-9322-a07309547f8c", 00:31:33.543 "numa_id": 0, 00:31:33.543 "assigned_rate_limits": { 00:31:33.543 "rw_ios_per_sec": 0, 00:31:33.543 "rw_mbytes_per_sec": 0, 00:31:33.543 "r_mbytes_per_sec": 0, 00:31:33.543 "w_mbytes_per_sec": 0 00:31:33.543 }, 00:31:33.543 "claimed": false, 00:31:33.543 "zoned": false, 00:31:33.543 "supported_io_types": { 00:31:33.543 "read": true, 00:31:33.543 "write": true, 00:31:33.543 "unmap": true, 00:31:33.543 "flush": true, 00:31:33.543 "reset": true, 00:31:33.543 "nvme_admin": true, 00:31:33.543 "nvme_io": true, 00:31:33.543 "nvme_io_md": false, 00:31:33.543 "write_zeroes": true, 00:31:33.543 "zcopy": false, 00:31:33.543 "get_zone_info": false, 00:31:33.543 "zone_management": false, 00:31:33.543 "zone_append": false, 00:31:33.543 "compare": true, 00:31:33.543 "compare_and_write": true, 00:31:33.543 "abort": true, 00:31:33.543 "seek_hole": false, 00:31:33.543 "seek_data": false, 00:31:33.543 "copy": true, 
00:31:33.543 "nvme_iov_md": false 00:31:33.543 }, 00:31:33.543 "memory_domains": [ 00:31:33.543 { 00:31:33.543 "dma_device_id": "system", 00:31:33.543 "dma_device_type": 1 00:31:33.543 } 00:31:33.543 ], 00:31:33.543 "driver_specific": { 00:31:33.543 "nvme": [ 00:31:33.543 { 00:31:33.543 "trid": { 00:31:33.543 "trtype": "TCP", 00:31:33.543 "adrfam": "IPv4", 00:31:33.543 "traddr": "10.0.0.2", 00:31:33.543 "trsvcid": "4420", 00:31:33.543 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:33.543 }, 00:31:33.543 "ctrlr_data": { 00:31:33.543 "cntlid": 1, 00:31:33.543 "vendor_id": "0x8086", 00:31:33.543 "model_number": "SPDK bdev Controller", 00:31:33.543 "serial_number": "SPDK0", 00:31:33.543 "firmware_revision": "25.01", 00:31:33.543 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:33.543 "oacs": { 00:31:33.543 "security": 0, 00:31:33.543 "format": 0, 00:31:33.543 "firmware": 0, 00:31:33.543 "ns_manage": 0 00:31:33.543 }, 00:31:33.543 "multi_ctrlr": true, 00:31:33.543 "ana_reporting": false 00:31:33.543 }, 00:31:33.543 "vs": { 00:31:33.543 "nvme_version": "1.3" 00:31:33.543 }, 00:31:33.543 "ns_data": { 00:31:33.543 "id": 1, 00:31:33.543 "can_share": true 00:31:33.543 } 00:31:33.543 } 00:31:33.543 ], 00:31:33.543 "mp_policy": "active_passive" 00:31:33.543 } 00:31:33.543 } 00:31:33.543 ] 00:31:33.543 12:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1870095 00:31:33.543 12:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:33.543 12:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:33.543 Running I/O for 10 seconds... 
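
The sizes in this test all follow from the 4 MiB cluster (--cluster-sz 4194304): the 200M backing file holds 50 clusters, reported as 49 data clusters (the remainder evidently goes to lvstore metadata — inferred from the counts, not stated in the log); the 150M lvol rounds up to 38 whole clusters, matching "num_blocks": 38912 in the bdev dump above; and after the truncate to 400M plus bdev_aio_rescan (51200 -> 102400 blocks), the grow below should land at 99 data clusters. As bash arithmetic:

    echo $(( 200*1024*1024 / 4194304 ))               # 50 clusters in the 200M file (49 usable)
    echo $(( (150*1024*1024 + 4194303) / 4194304 ))   # 38: the 150M lvol rounded up to whole clusters
    echo $(( 38 * 4194304 / 4096 ))                   # 38912: num_blocks of the Nvme0n1 bdev
    echo $(( 400*1024*1024 / 4194304 ))               # 100 after the grow (99 usable)
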
00:31:34.485 Latency(us)
[2024-11-04T11:36:09.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:34.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:34.485 Nvme0n1 : 1.00 17458.00 68.20 0.00 0.00 0.00 0.00 0.00
[2024-11-04T11:36:09.055Z] ===================================================================================================================
00:31:34.485 [2024-11-04T11:36:09.055Z] Total : 17458.00 68.20 0.00 0.00 0.00 0.00 0.00
00:31:34.485
00:31:35.426 12:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f734fb63-b583-451e-a865-9755b127dd5d
00:31:35.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:35.686 Nvme0n1 : 2.00 17581.00 68.68 0.00 0.00 0.00 0.00 0.00
[2024-11-04T11:36:10.256Z] ===================================================================================================================
00:31:35.686 [2024-11-04T11:36:10.256Z] Total : 17581.00 68.68 0.00 0.00 0.00 0.00 0.00
00:31:35.686
00:31:35.686 true
00:31:35.686 12:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f734fb63-b583-451e-a865-9755b127dd5d
00:31:35.947 12:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:31:35.947 12:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:31:35.947 12:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:31:35.947 12:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1870095
00:31:36.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:36.519 Nvme0n1 : 3.00 17618.33 68.82 0.00 0.00 0.00 0.00 0.00
[2024-11-04T11:36:11.089Z] ===================================================================================================================
00:31:36.519 [2024-11-04T11:36:11.089Z] Total : 17618.33 68.82 0.00 0.00 0.00 0.00 0.00
00:31:36.519
00:31:37.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:37.901 Nvme0n1 : 4.00 17646.00 68.93 0.00 0.00 0.00 0.00 0.00
[2024-11-04T11:36:12.471Z] ===================================================================================================================
00:31:37.901 [2024-11-04T11:36:12.471Z] Total : 17646.00 68.93 0.00 0.00 0.00 0.00 0.00
00:31:37.901
00:31:38.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:38.866 Nvme0n1 : 5.00 17675.20 69.04 0.00 0.00 0.00 0.00 0.00
[2024-11-04T11:36:13.436Z] ===================================================================================================================
00:31:38.866 [2024-11-04T11:36:13.436Z] Total : 17675.20 69.04 0.00 0.00 0.00 0.00 0.00
00:31:38.866
00:31:39.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:39.836 Nvme0n1 : 6.00 17694.67 69.12 0.00 0.00 0.00 0.00 0.00
[2024-11-04T11:36:14.406Z] ===================================================================================================================
00:31:39.836 [2024-11-04T11:36:14.406Z] Total : 17694.67 69.12 0.00 0.00 0.00 0.00 0.00
00:31:39.836
00:31:40.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:40.778 Nvme0n1 : 7.00 17708.57 69.17 0.00 0.00 0.00 0.00 0.00
[2024-11-04T11:36:15.348Z] ===================================================================================================================
00:31:40.778 [2024-11-04T11:36:15.348Z] Total : 17708.57 69.17 0.00 0.00 0.00 0.00 0.00
00:31:40.778
00:31:41.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:41.725 Nvme0n1 : 8.00 17724.00 69.23 0.00 0.00 0.00 0.00 0.00
[2024-11-04T11:36:16.295Z] ===================================================================================================================
00:31:41.725 [2024-11-04T11:36:16.295Z] Total : 17724.00 69.23 0.00 0.00 0.00 0.00 0.00
00:31:41.725
00:31:42.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:42.667 Nvme0n1 : 9.00 17742.89 69.31 0.00 0.00 0.00 0.00 0.00
[2024-11-04T11:36:17.237Z] ===================================================================================================================
00:31:42.667 [2024-11-04T11:36:17.237Z] Total : 17742.89 69.31 0.00 0.00 0.00 0.00 0.00
00:31:42.667
00:31:43.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:43.610 Nvme0n1 : 10.00 17754.30 69.35 0.00 0.00 0.00 0.00 0.00
[2024-11-04T11:36:18.180Z] ===================================================================================================================
00:31:43.610 [2024-11-04T11:36:18.180Z] Total : 17754.30 69.35 0.00 0.00 0.00 0.00 0.00
00:31:43.610
00:31:43.610
00:31:43.610 Latency(us)
[2024-11-04T11:36:18.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:43.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:43.610 Nvme0n1 : 10.00 17752.97 69.35 0.00 0.00 7205.42 4423.68 16384.00
[2024-11-04T11:36:18.180Z] ===================================================================================================================
00:31:43.610 [2024-11-04T11:36:18.180Z] Total : 17752.97 69.35 0.00 0.00 7205.42 4423.68 16384.00
00:31:43.610 {
00:31:43.610 "results": [
00:31:43.610 {
00:31:43.610 "job": "Nvme0n1",
00:31:43.610 "core_mask": "0x2",
00:31:43.610 "workload": "randwrite",
00:31:43.610 "status": "finished",
00:31:43.610 "queue_depth": 128,
00:31:43.610 "io_size": 4096,
00:31:43.610 "runtime": 10.0043,
00:31:43.610 "iops": 17752.966224523454,
00:31:43.610 "mibps": 69.34752431454474,
00:31:43.610 "io_failed": 0,
00:31:43.610 "io_timeout": 0,
00:31:43.610 "avg_latency_us": 7205.415788205353,
00:31:43.610 "min_latency_us": 4423.68,
00:31:43.610 "max_latency_us": 16384.0
00:31:43.610 }
00:31:43.610 ],
00:31:43.610 "core_count": 1
00:31:43.610 }
00:31:43.610 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1869929
00:31:43.610 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1869929 ']'
00:31:43.610 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1869929
00:31:43.610 12:36:18
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:31:43.610 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:43.610 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1869929 00:31:43.610 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:43.610 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:43.610 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1869929' 00:31:43.610 killing process with pid 1869929 00:31:43.610 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1869929 00:31:43.610 Received shutdown signal, test time was about 10.000000 seconds 00:31:43.610 00:31:43.610 Latency(us) 00:31:43.610 [2024-11-04T11:36:18.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:43.610 [2024-11-04T11:36:18.180Z] =================================================================================================================== 00:31:43.610 [2024-11-04T11:36:18.180Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:43.610 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1869929 00:31:43.871 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:43.871 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:44.133 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f734fb63-b583-451e-a865-9755b127dd5d 00:31:44.133 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:44.394 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:44.394 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:44.394 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:44.394 [2024-11-04 12:36:18.945191] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:44.656 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f734fb63-b583-451e-a865-9755b127dd5d 00:31:44.656 12:36:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:31:44.656 12:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f734fb63-b583-451e-a865-9755b127dd5d 00:31:44.656 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:44.656 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:44.656 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:44.656 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:44.656 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:44.656 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:44.656 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:44.656 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:44.656 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f734fb63-b583-451e-a865-9755b127dd5d 00:31:44.656 request: 00:31:44.656 { 00:31:44.656 "uuid": "f734fb63-b583-451e-a865-9755b127dd5d", 00:31:44.656 "method": "bdev_lvol_get_lvstores", 00:31:44.656 "req_id": 1 00:31:44.656 } 00:31:44.656 Got JSON-RPC error response 00:31:44.656 response: 00:31:44.656 { 00:31:44.656 "code": -19, 00:31:44.656 "message": "No such device" 00:31:44.656 } 00:31:44.656 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:31:44.656 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:44.656 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:44.656 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:44.656 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:44.918 aio_bdev 00:31:44.918 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5a746cae-9827-42ae-9322-a07309547f8c 
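The exchange above is the clean-teardown negative check: with the backing aio_bdev deleted, bdev_lvol_get_lvstores must fail with -19 ("No such device"), and re-creating the AIO bdev over the same file lets lvol examine re-register the store. A minimal sketch of that check, assuming the rpc.py path, backing-file path, and UUID from this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    LVS_UUID=f734fb63-b583-451e-a865-9755b127dd5d
    AIO_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
    # Expect failure: the lvstore's base bdev was just deleted.
    if "$RPC" bdev_lvol_get_lvstores -u "$LVS_UUID" 2>/dev/null; then
        echo "unexpected: lvstore still registered" >&2
        exit 1
    fi
    # Re-create the file-backed AIO bdev (4096-byte block size); examine
    # should then bring lvs/lvol back, as the waitforbdev below confirms.
    "$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096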
00:31:44.918 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=5a746cae-9827-42ae-9322-a07309547f8c 00:31:44.918 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:44.918 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:31:44.918 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:44.918 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:44.918 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:45.179 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5a746cae-9827-42ae-9322-a07309547f8c -t 2000 00:31:45.179 [ 00:31:45.179 { 00:31:45.179 "name": "5a746cae-9827-42ae-9322-a07309547f8c", 00:31:45.179 "aliases": [ 00:31:45.179 "lvs/lvol" 00:31:45.179 ], 00:31:45.179 "product_name": "Logical Volume", 00:31:45.179 "block_size": 4096, 00:31:45.179 "num_blocks": 38912, 00:31:45.179 "uuid": "5a746cae-9827-42ae-9322-a07309547f8c", 00:31:45.179 "assigned_rate_limits": { 00:31:45.179 "rw_ios_per_sec": 0, 00:31:45.179 "rw_mbytes_per_sec": 0, 00:31:45.179 "r_mbytes_per_sec": 0, 00:31:45.179 "w_mbytes_per_sec": 0 00:31:45.179 }, 00:31:45.179 "claimed": false, 00:31:45.179 "zoned": false, 00:31:45.179 "supported_io_types": { 00:31:45.179 "read": true, 00:31:45.179 "write": true, 00:31:45.179 "unmap": true, 00:31:45.179 "flush": false, 00:31:45.179 "reset": true, 00:31:45.179 "nvme_admin": false, 00:31:45.179 "nvme_io": false, 00:31:45.179 "nvme_io_md": false, 00:31:45.179 "write_zeroes": true, 00:31:45.179 "zcopy": false, 00:31:45.179 "get_zone_info": false, 00:31:45.179 "zone_management": false, 00:31:45.179 "zone_append": false, 00:31:45.179 "compare": false, 00:31:45.179 "compare_and_write": false, 00:31:45.179 "abort": false, 00:31:45.179 "seek_hole": true, 00:31:45.179 "seek_data": true, 00:31:45.179 "copy": false, 00:31:45.179 "nvme_iov_md": false 00:31:45.179 }, 00:31:45.179 "driver_specific": { 00:31:45.179 "lvol": { 00:31:45.179 "lvol_store_uuid": "f734fb63-b583-451e-a865-9755b127dd5d", 00:31:45.179 "base_bdev": "aio_bdev", 00:31:45.179 "thin_provision": false, 00:31:45.179 "num_allocated_clusters": 38, 00:31:45.179 "snapshot": false, 00:31:45.179 "clone": false, 00:31:45.179 "esnap_clone": false 00:31:45.179 } 00:31:45.179 } 00:31:45.179 } 00:31:45.179 ] 00:31:45.179 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:31:45.179 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f734fb63-b583-451e-a865-9755b127dd5d 00:31:45.179 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:45.440 12:36:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:45.440 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f734fb63-b583-451e-a865-9755b127dd5d 00:31:45.440 12:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:45.701 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:45.701 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5a746cae-9827-42ae-9322-a07309547f8c 00:31:45.701 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f734fb63-b583-451e-a865-9755b127dd5d 00:31:45.961 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:46.223 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:46.223 00:31:46.223 real 0m15.878s 00:31:46.223 user 0m15.484s 00:31:46.223 sys 0m1.491s 00:31:46.223 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:46.223 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:46.223 ************************************ 00:31:46.223 END TEST lvs_grow_clean 00:31:46.223 ************************************ 00:31:46.223 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:46.223 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:46.223 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:46.223 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:46.223 ************************************ 00:31:46.223 START TEST lvs_grow_dirty 00:31:46.223 ************************************ 00:31:46.223 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:31:46.223 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:46.223 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:46.223 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:46.223 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:46.223 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:46.223 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:46.223 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:46.223 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:46.223 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:46.485 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:46.485 12:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:46.746 12:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b578d5ab-074b-4a84-96f2-fe07e3d3104d 00:31:46.746 12:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b578d5ab-074b-4a84-96f2-fe07e3d3104d 00:31:46.746 12:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:47.007 12:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:47.007 12:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:47.007 12:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b578d5ab-074b-4a84-96f2-fe07e3d3104d lvol 150 00:31:47.007 12:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9be4ec39-d5b4-4ba1-acb0-4230bb4429ed 00:31:47.007 12:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:47.007 12:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:47.269 [2024-11-04 12:36:21.669116] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:47.269 [2024-11-04 12:36:21.669284] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:47.269 true 00:31:47.269 12:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b578d5ab-074b-4a84-96f2-fe07e3d3104d 00:31:47.269 12:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:47.529 12:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:47.529 12:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:47.529 12:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9be4ec39-d5b4-4ba1-acb0-4230bb4429ed 00:31:47.791 12:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:48.053 [2024-11-04 12:36:22.381420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:48.053 12:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:48.053 12:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1873384 00:31:48.053 12:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:48.053 12:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:48.053 12:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1873384 /var/tmp/bdevperf.sock 00:31:48.053 12:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1873384 ']' 00:31:48.053 12:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:48.053 12:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:48.053 12:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:48.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
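At this point the dirty variant has exported the 150 MiB lvol over NVMe/TCP and launched bdevperf as a secondary application that idles on its RPC socket until told to run. A hedged sketch of the export-and-attach sequence, with the addresses, NQN, and flags copied from this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode0
    "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK0
    "$RPC" nvmf_subsystem_add_ns "$NQN" 9be4ec39-d5b4-4ba1-acb0-4230bb4429ed
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    # 4 KiB randwrite at queue depth 128 for 10 s; -S 1 prints the per-second
    # Nvme0n1 table seen in this log, -z waits for RPCs before starting.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # Attach the exported namespace inside bdevperf, then start the workload:
    "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests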
00:31:48.053 12:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:48.053 12:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:48.315 [2024-11-04 12:36:22.623914] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:31:48.315 [2024-11-04 12:36:22.623991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1873384 ] 00:31:48.315 [2024-11-04 12:36:22.703199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.315 [2024-11-04 12:36:22.735633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.885 12:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:48.885 12:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:31:48.885 12:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:49.146 Nvme0n1 00:31:49.146 12:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:49.407 [ 00:31:49.407 { 00:31:49.407 "name": "Nvme0n1", 00:31:49.407 "aliases": [ 00:31:49.407 "9be4ec39-d5b4-4ba1-acb0-4230bb4429ed" 00:31:49.407 ], 00:31:49.407 "product_name": "NVMe disk", 00:31:49.407 "block_size": 4096, 00:31:49.407 "num_blocks": 38912, 00:31:49.407 "uuid": "9be4ec39-d5b4-4ba1-acb0-4230bb4429ed", 00:31:49.407 "numa_id": 0, 00:31:49.407 "assigned_rate_limits": { 00:31:49.407 "rw_ios_per_sec": 0, 00:31:49.407 "rw_mbytes_per_sec": 0, 00:31:49.407 "r_mbytes_per_sec": 0, 00:31:49.407 "w_mbytes_per_sec": 0 00:31:49.407 }, 00:31:49.407 "claimed": false, 00:31:49.407 "zoned": false, 00:31:49.407 "supported_io_types": { 00:31:49.407 "read": true, 00:31:49.407 "write": true, 00:31:49.407 "unmap": true, 00:31:49.407 "flush": true, 00:31:49.407 "reset": true, 00:31:49.407 "nvme_admin": true, 00:31:49.407 "nvme_io": true, 00:31:49.407 "nvme_io_md": false, 00:31:49.407 "write_zeroes": true, 00:31:49.407 "zcopy": false, 00:31:49.407 "get_zone_info": false, 00:31:49.407 "zone_management": false, 00:31:49.407 "zone_append": false, 00:31:49.407 "compare": true, 00:31:49.407 "compare_and_write": true, 00:31:49.407 "abort": true, 00:31:49.407 "seek_hole": false, 00:31:49.407 "seek_data": false, 00:31:49.407 "copy": true, 00:31:49.407 "nvme_iov_md": false 00:31:49.407 }, 00:31:49.407 "memory_domains": [ 00:31:49.407 { 00:31:49.407 "dma_device_id": "system", 00:31:49.407 "dma_device_type": 1 00:31:49.407 } 00:31:49.407 ], 00:31:49.407 "driver_specific": { 00:31:49.407 "nvme": [ 00:31:49.407 { 00:31:49.407 "trid": { 00:31:49.407 "trtype": "TCP", 00:31:49.407 "adrfam": "IPv4", 00:31:49.407 "traddr": "10.0.0.2", 00:31:49.407 "trsvcid": "4420", 00:31:49.407 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:49.407 }, 00:31:49.407 "ctrlr_data": 
{ 00:31:49.407 "cntlid": 1, 00:31:49.407 "vendor_id": "0x8086", 00:31:49.407 "model_number": "SPDK bdev Controller", 00:31:49.407 "serial_number": "SPDK0", 00:31:49.407 "firmware_revision": "25.01", 00:31:49.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:49.407 "oacs": { 00:31:49.407 "security": 0, 00:31:49.407 "format": 0, 00:31:49.407 "firmware": 0, 00:31:49.407 "ns_manage": 0 00:31:49.407 }, 00:31:49.407 "multi_ctrlr": true, 00:31:49.407 "ana_reporting": false 00:31:49.407 }, 00:31:49.407 "vs": { 00:31:49.407 "nvme_version": "1.3" 00:31:49.407 }, 00:31:49.407 "ns_data": { 00:31:49.407 "id": 1, 00:31:49.407 "can_share": true 00:31:49.407 } 00:31:49.407 } 00:31:49.407 ], 00:31:49.407 "mp_policy": "active_passive" 00:31:49.407 } 00:31:49.407 } 00:31:49.407 ] 00:31:49.407 12:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1873631 00:31:49.407 12:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:49.408 12:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:49.408 Running I/O for 10 seconds... 00:31:50.350 Latency(us) 00:31:50.350 [2024-11-04T11:36:24.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:50.350 Nvme0n1 : 1.00 17478.00 68.27 0.00 0.00 0.00 0.00 0.00 00:31:50.350 [2024-11-04T11:36:24.920Z] =================================================================================================================== 00:31:50.350 [2024-11-04T11:36:24.920Z] Total : 17478.00 68.27 0.00 0.00 0.00 0.00 0.00 00:31:50.350 00:31:51.291 12:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b578d5ab-074b-4a84-96f2-fe07e3d3104d 00:31:51.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:51.551 Nvme0n1 : 2.00 17571.00 68.64 0.00 0.00 0.00 0.00 0.00 00:31:51.551 [2024-11-04T11:36:26.121Z] =================================================================================================================== 00:31:51.551 [2024-11-04T11:36:26.121Z] Total : 17571.00 68.64 0.00 0.00 0.00 0.00 0.00 00:31:51.551 00:31:51.551 true 00:31:51.551 12:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b578d5ab-074b-4a84-96f2-fe07e3d3104d 00:31:51.551 12:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:51.812 12:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:51.812 12:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:51.812 12:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1873631 00:31:52.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:52.382 Nvme0n1 : 
3.00 17602.00 68.76 0.00 0.00 0.00 0.00 0.00 00:31:52.382 [2024-11-04T11:36:26.952Z] =================================================================================================================== 00:31:52.382 [2024-11-04T11:36:26.952Z] Total : 17602.00 68.76 0.00 0.00 0.00 0.00 0.00 00:31:52.382 00:31:53.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:53.765 Nvme0n1 : 4.00 17633.75 68.88 0.00 0.00 0.00 0.00 0.00 00:31:53.765 [2024-11-04T11:36:28.335Z] =================================================================================================================== 00:31:53.765 [2024-11-04T11:36:28.335Z] Total : 17633.75 68.88 0.00 0.00 0.00 0.00 0.00 00:31:53.765 00:31:54.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:54.706 Nvme0n1 : 5.00 17652.40 68.95 0.00 0.00 0.00 0.00 0.00 00:31:54.706 [2024-11-04T11:36:29.276Z] =================================================================================================================== 00:31:54.706 [2024-11-04T11:36:29.276Z] Total : 17652.40 68.95 0.00 0.00 0.00 0.00 0.00 00:31:54.706 00:31:55.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:55.645 Nvme0n1 : 6.00 17675.67 69.05 0.00 0.00 0.00 0.00 0.00 00:31:55.645 [2024-11-04T11:36:30.215Z] =================================================================================================================== 00:31:55.645 [2024-11-04T11:36:30.215Z] Total : 17675.67 69.05 0.00 0.00 0.00 0.00 0.00 00:31:55.645 00:31:56.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:56.586 Nvme0n1 : 7.00 17683.29 69.08 0.00 0.00 0.00 0.00 0.00 00:31:56.586 [2024-11-04T11:36:31.156Z] =================================================================================================================== 00:31:56.586 [2024-11-04T11:36:31.156Z] Total : 17683.29 69.08 0.00 0.00 0.00 0.00 0.00 00:31:56.586 00:31:57.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:57.527 Nvme0n1 : 8.00 17696.88 69.13 0.00 0.00 0.00 0.00 0.00 00:31:57.527 [2024-11-04T11:36:32.097Z] =================================================================================================================== 00:31:57.528 [2024-11-04T11:36:32.098Z] Total : 17696.88 69.13 0.00 0.00 0.00 0.00 0.00 00:31:57.528 00:31:58.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:58.468 Nvme0n1 : 9.00 17707.44 69.17 0.00 0.00 0.00 0.00 0.00 00:31:58.468 [2024-11-04T11:36:33.038Z] =================================================================================================================== 00:31:58.468 [2024-11-04T11:36:33.038Z] Total : 17707.44 69.17 0.00 0.00 0.00 0.00 0.00 00:31:58.468 00:31:59.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:59.409 Nvme0n1 : 10.00 17715.90 69.20 0.00 0.00 0.00 0.00 0.00 00:31:59.409 [2024-11-04T11:36:33.979Z] =================================================================================================================== 00:31:59.409 [2024-11-04T11:36:33.979Z] Total : 17715.90 69.20 0.00 0.00 0.00 0.00 0.00 00:31:59.409 00:31:59.409 00:31:59.409 Latency(us) 00:31:59.409 [2024-11-04T11:36:33.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:59.409 Nvme0n1 : 10.01 17718.20 69.21 0.00 0.00 7220.33 1693.01 13107.20 00:31:59.409 
[2024-11-04T11:36:33.979Z] =================================================================================================================== 00:31:59.409 [2024-11-04T11:36:33.979Z] Total : 17718.20 69.21 0.00 0.00 7220.33 1693.01 13107.20 00:31:59.409 { 00:31:59.409 "results": [ 00:31:59.409 { 00:31:59.409 "job": "Nvme0n1", 00:31:59.409 "core_mask": "0x2", 00:31:59.409 "workload": "randwrite", 00:31:59.409 "status": "finished", 00:31:59.409 "queue_depth": 128, 00:31:59.409 "io_size": 4096, 00:31:59.409 "runtime": 10.005926, 00:31:59.409 "iops": 17718.200194564703, 00:31:59.409 "mibps": 69.21171951001837, 00:31:59.409 "io_failed": 0, 00:31:59.409 "io_timeout": 0, 00:31:59.409 "avg_latency_us": 7220.333972071649, 00:31:59.409 "min_latency_us": 1693.0133333333333, 00:31:59.409 "max_latency_us": 13107.2 00:31:59.409 } 00:31:59.409 ], 00:31:59.409 "core_count": 1 00:31:59.409 } 00:31:59.409 12:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1873384 00:31:59.409 12:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1873384 ']' 00:31:59.409 12:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1873384 00:31:59.409 12:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:31:59.409 12:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:59.409 12:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1873384 00:31:59.670 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:59.670 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:59.670 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1873384' 00:31:59.670 killing process with pid 1873384 00:31:59.670 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1873384 00:31:59.670 Received shutdown signal, test time was about 10.000000 seconds 00:31:59.670 00:31:59.670 Latency(us) 00:31:59.670 [2024-11-04T11:36:34.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.670 [2024-11-04T11:36:34.240Z] =================================================================================================================== 00:31:59.670 [2024-11-04T11:36:34.240Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:59.670 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1873384 00:31:59.670 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:59.930 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:31:59.930 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:59.930 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b578d5ab-074b-4a84-96f2-fe07e3d3104d 00:32:00.191 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:00.191 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:00.191 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1869367 00:32:00.191 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1869367 00:32:00.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1869367 Killed "${NVMF_APP[@]}" "$@" 00:32:00.191 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:00.191 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:00.191 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:00.191 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:00.191 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:00.191 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1875659 00:32:00.191 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1875659 00:32:00.191 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:00.191 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1875659 ']' 00:32:00.191 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.191 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:00.191 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
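The dirty path has just simulated a crash: kill -9 on the first target (pid 1869367) skips clean shutdown, leaving the lvstore superblock dirty on disk, and a fresh nvmf_tgt is brought up in interrupt mode to reload it. A sketch of that restart with the flags from this run; the pid variable is a placeholder for the value the harness saved earlier:

    # Hard-kill the old target so the blobstore is left dirty.
    kill -9 "$OLD_NVMF_PID"    # placeholder for the previously saved pid
    # Restart inside the same netns with tracing and interrupt mode, core 0 only.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &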
00:32:00.191 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:00.192 12:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:00.192 [2024-11-04 12:36:34.753233] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:00.192 [2024-11-04 12:36:34.754275] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:32:00.192 [2024-11-04 12:36:34.754330] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.453 [2024-11-04 12:36:34.824377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.453 [2024-11-04 12:36:34.864824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.453 [2024-11-04 12:36:34.864861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.453 [2024-11-04 12:36:34.864871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.453 [2024-11-04 12:36:34.864880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.453 [2024-11-04 12:36:34.864888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.453 [2024-11-04 12:36:34.865464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.453 [2024-11-04 12:36:34.921156] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:00.453 [2024-11-04 12:36:34.921420] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:01.025 12:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:01.025 12:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:32:01.025 12:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:01.025 12:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:01.025 12:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:01.025 12:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:01.286 12:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:01.286 [2024-11-04 12:36:35.764662] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:01.286 [2024-11-04 12:36:35.764819] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:01.286 [2024-11-04 12:36:35.764854] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:01.286 12:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:01.286 12:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9be4ec39-d5b4-4ba1-acb0-4230bb4429ed 00:32:01.286 12:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=9be4ec39-d5b4-4ba1-acb0-4230bb4429ed 00:32:01.286 12:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:01.286 12:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:32:01.286 12:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:01.286 12:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:01.286 12:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:01.547 12:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9be4ec39-d5b4-4ba1-acb0-4230bb4429ed -t 2000 00:32:01.807 [ 00:32:01.807 { 00:32:01.807 "name": "9be4ec39-d5b4-4ba1-acb0-4230bb4429ed", 00:32:01.807 "aliases": [ 00:32:01.807 "lvs/lvol" 00:32:01.807 ], 00:32:01.807 "product_name": "Logical Volume", 00:32:01.807 "block_size": 4096, 00:32:01.807 "num_blocks": 38912, 00:32:01.807 "uuid": "9be4ec39-d5b4-4ba1-acb0-4230bb4429ed", 00:32:01.807 "assigned_rate_limits": { 00:32:01.807 "rw_ios_per_sec": 0, 00:32:01.807 "rw_mbytes_per_sec": 0, 00:32:01.807 
"r_mbytes_per_sec": 0, 00:32:01.807 "w_mbytes_per_sec": 0 00:32:01.807 }, 00:32:01.807 "claimed": false, 00:32:01.807 "zoned": false, 00:32:01.807 "supported_io_types": { 00:32:01.807 "read": true, 00:32:01.807 "write": true, 00:32:01.807 "unmap": true, 00:32:01.807 "flush": false, 00:32:01.807 "reset": true, 00:32:01.807 "nvme_admin": false, 00:32:01.807 "nvme_io": false, 00:32:01.807 "nvme_io_md": false, 00:32:01.807 "write_zeroes": true, 00:32:01.807 "zcopy": false, 00:32:01.807 "get_zone_info": false, 00:32:01.807 "zone_management": false, 00:32:01.807 "zone_append": false, 00:32:01.807 "compare": false, 00:32:01.807 "compare_and_write": false, 00:32:01.807 "abort": false, 00:32:01.807 "seek_hole": true, 00:32:01.807 "seek_data": true, 00:32:01.807 "copy": false, 00:32:01.807 "nvme_iov_md": false 00:32:01.807 }, 00:32:01.807 "driver_specific": { 00:32:01.807 "lvol": { 00:32:01.807 "lvol_store_uuid": "b578d5ab-074b-4a84-96f2-fe07e3d3104d", 00:32:01.807 "base_bdev": "aio_bdev", 00:32:01.807 "thin_provision": false, 00:32:01.807 "num_allocated_clusters": 38, 00:32:01.807 "snapshot": false, 00:32:01.807 "clone": false, 00:32:01.807 "esnap_clone": false 00:32:01.807 } 00:32:01.807 } 00:32:01.807 } 00:32:01.807 ] 00:32:01.807 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:32:01.807 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b578d5ab-074b-4a84-96f2-fe07e3d3104d 00:32:01.807 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:01.807 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:01.807 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b578d5ab-074b-4a84-96f2-fe07e3d3104d 00:32:01.807 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:02.069 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:02.069 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:02.331 [2024-11-04 12:36:36.641930] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:02.331 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b578d5ab-074b-4a84-96f2-fe07e3d3104d 00:32:02.331 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:32:02.331 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b578d5ab-074b-4a84-96f2-fe07e3d3104d 00:32:02.331 12:36:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:02.331 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:02.331 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:02.331 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:02.331 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:02.331 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:02.331 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:02.331 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:02.331 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b578d5ab-074b-4a84-96f2-fe07e3d3104d 00:32:02.331 request: 00:32:02.331 { 00:32:02.331 "uuid": "b578d5ab-074b-4a84-96f2-fe07e3d3104d", 00:32:02.331 "method": "bdev_lvol_get_lvstores", 00:32:02.331 "req_id": 1 00:32:02.331 } 00:32:02.331 Got JSON-RPC error response 00:32:02.331 response: 00:32:02.331 { 00:32:02.331 "code": -19, 00:32:02.331 "message": "No such device" 00:32:02.331 } 00:32:02.331 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:32:02.331 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:02.331 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:02.331 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:02.331 12:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:02.592 aio_bdev 00:32:02.592 12:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9be4ec39-d5b4-4ba1-acb0-4230bb4429ed 00:32:02.592 12:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=9be4ec39-d5b4-4ba1-acb0-4230bb4429ed 00:32:02.592 12:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:02.592 12:36:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:32:02.592 12:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:02.592 12:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:02.592 12:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:02.853 12:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9be4ec39-d5b4-4ba1-acb0-4230bb4429ed -t 2000 00:32:02.853 [ 00:32:02.853 { 00:32:02.853 "name": "9be4ec39-d5b4-4ba1-acb0-4230bb4429ed", 00:32:02.853 "aliases": [ 00:32:02.853 "lvs/lvol" 00:32:02.853 ], 00:32:02.853 "product_name": "Logical Volume", 00:32:02.853 "block_size": 4096, 00:32:02.853 "num_blocks": 38912, 00:32:02.853 "uuid": "9be4ec39-d5b4-4ba1-acb0-4230bb4429ed", 00:32:02.853 "assigned_rate_limits": { 00:32:02.853 "rw_ios_per_sec": 0, 00:32:02.853 "rw_mbytes_per_sec": 0, 00:32:02.853 "r_mbytes_per_sec": 0, 00:32:02.853 "w_mbytes_per_sec": 0 00:32:02.853 }, 00:32:02.853 "claimed": false, 00:32:02.853 "zoned": false, 00:32:02.853 "supported_io_types": { 00:32:02.853 "read": true, 00:32:02.853 "write": true, 00:32:02.853 "unmap": true, 00:32:02.853 "flush": false, 00:32:02.853 "reset": true, 00:32:02.853 "nvme_admin": false, 00:32:02.853 "nvme_io": false, 00:32:02.853 "nvme_io_md": false, 00:32:02.853 "write_zeroes": true, 00:32:02.853 "zcopy": false, 00:32:02.853 "get_zone_info": false, 00:32:02.853 "zone_management": false, 00:32:02.853 "zone_append": false, 00:32:02.853 "compare": false, 00:32:02.853 "compare_and_write": false, 00:32:02.853 "abort": false, 00:32:02.853 "seek_hole": true, 00:32:02.853 "seek_data": true, 00:32:02.853 "copy": false, 00:32:02.853 "nvme_iov_md": false 00:32:02.853 }, 00:32:02.853 "driver_specific": { 00:32:02.853 "lvol": { 00:32:02.853 "lvol_store_uuid": "b578d5ab-074b-4a84-96f2-fe07e3d3104d", 00:32:02.853 "base_bdev": "aio_bdev", 00:32:02.853 "thin_provision": false, 00:32:02.853 "num_allocated_clusters": 38, 00:32:02.853 "snapshot": false, 00:32:02.853 "clone": false, 00:32:02.853 "esnap_clone": false 00:32:02.853 } 00:32:02.853 } 00:32:02.853 } 00:32:02.853 ] 00:32:03.114 12:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:32:03.114 12:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b578d5ab-074b-4a84-96f2-fe07e3d3104d 00:32:03.114 12:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:03.114 12:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:03.114 12:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b578d5ab-074b-4a84-96f2-fe07e3d3104d 00:32:03.114 12:36:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:03.375 12:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:03.375 12:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9be4ec39-d5b4-4ba1-acb0-4230bb4429ed 00:32:03.638 12:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b578d5ab-074b-4a84-96f2-fe07e3d3104d 00:32:03.638 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:03.900 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:03.900 00:32:03.900 real 0m17.637s 00:32:03.900 user 0m35.475s 00:32:03.900 sys 0m3.020s 00:32:03.900 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:03.900 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:03.900 ************************************ 00:32:03.900 END TEST lvs_grow_dirty 00:32:03.900 ************************************ 00:32:03.900 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:03.900 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:32:03.900 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:32:03.900 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:32:03.900 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:03.900 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:32:03.900 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:32:03.900 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:32:03.900 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:03.900 nvmf_trace.0 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
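The tar step above preserves the tracepoint shared-memory file produced by running nvmf_tgt with -e 0xFFFF, so traces survive the teardown that follows. A short sketch of that capture; the snapshot command is the one the target itself suggested at startup:

    # Archive the trace shm before the target exits and /dev/shm is cleaned up.
    tar -C /dev/shm/ -czf ./nvmf_trace.0_shm.tar.gz nvmf_trace.0
    # While the target is still running, a live snapshot works too:
    #   spdk_trace -s nvmf -i 0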
00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:04.161 rmmod nvme_tcp 00:32:04.161 rmmod nvme_fabrics 00:32:04.161 rmmod nvme_keyring 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1875659 ']' 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1875659 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1875659 ']' 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1875659 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1875659 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1875659' 00:32:04.161 killing process with pid 1875659 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1875659 00:32:04.161 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1875659 00:32:04.422 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:04.422 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:04.422 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:04.422 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:04.422 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:32:04.422 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:04.422 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:32:04.422 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:04.422 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:04.422 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.422 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:04.422 12:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.334 12:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:06.334 00:32:06.334 real 0m44.778s 00:32:06.334 user 0m53.813s 00:32:06.334 sys 0m10.634s 00:32:06.334 12:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:06.334 12:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:06.334 ************************************ 00:32:06.334 END TEST nvmf_lvs_grow 00:32:06.334 ************************************ 00:32:06.334 12:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:06.334 12:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:06.334 12:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:06.334 12:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:06.596 ************************************ 00:32:06.596 START TEST nvmf_bdev_io_wait 00:32:06.596 ************************************ 00:32:06.596 12:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:06.596 * Looking for test storage... 
00:32:06.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:06.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.596 --rc genhtml_branch_coverage=1 00:32:06.596 --rc genhtml_function_coverage=1 00:32:06.596 --rc genhtml_legend=1 00:32:06.596 --rc geninfo_all_blocks=1 00:32:06.596 --rc geninfo_unexecuted_blocks=1 00:32:06.596 00:32:06.596 ' 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:06.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.596 --rc genhtml_branch_coverage=1 00:32:06.596 --rc genhtml_function_coverage=1 00:32:06.596 --rc genhtml_legend=1 00:32:06.596 --rc geninfo_all_blocks=1 00:32:06.596 --rc geninfo_unexecuted_blocks=1 00:32:06.596 00:32:06.596 ' 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:06.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.596 --rc genhtml_branch_coverage=1 00:32:06.596 --rc genhtml_function_coverage=1 00:32:06.596 --rc genhtml_legend=1 00:32:06.596 --rc geninfo_all_blocks=1 00:32:06.596 --rc geninfo_unexecuted_blocks=1 00:32:06.596 00:32:06.596 ' 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:06.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.596 --rc genhtml_branch_coverage=1 00:32:06.596 --rc genhtml_function_coverage=1 00:32:06.596 --rc genhtml_legend=1 00:32:06.596 --rc geninfo_all_blocks=1 00:32:06.596 --rc 
geninfo_unexecuted_blocks=1 00:32:06.596 00:32:06.596 ' 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:06.596 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:06.597 12:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:14.736 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:14.736 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:14.736 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:14.736 
12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.736 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:14.737 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:14.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:14.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:32:14.737 00:32:14.737 --- 10.0.0.2 ping statistics --- 00:32:14.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.737 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:14.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:14.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:32:14.737 00:32:14.737 --- 10.0.0.1 ping statistics --- 00:32:14.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.737 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1880712 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1880712 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1880712 ']' 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:14.737 12:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:14.737 [2024-11-04 12:36:48.517897] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:14.737 [2024-11-04 12:36:48.519034] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:32:14.737 [2024-11-04 12:36:48.519084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:14.737 [2024-11-04 12:36:48.589969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:14.737 [2024-11-04 12:36:48.633848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:14.737 [2024-11-04 12:36:48.633884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:14.737 [2024-11-04 12:36:48.633892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:14.737 [2024-11-04 12:36:48.633901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:14.737 [2024-11-04 12:36:48.633907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:14.737 [2024-11-04 12:36:48.635703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:14.737 [2024-11-04 12:36:48.635843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:14.737 [2024-11-04 12:36:48.635903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.737 [2024-11-04 12:36:48.635903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:14.737 [2024-11-04 12:36:48.636343] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:32:14.998 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:14.998 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:32:14.998 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:14.999 [2024-11-04 12:36:49.405538] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:14.999 [2024-11-04 12:36:49.405788] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:14.999 [2024-11-04 12:36:49.406474] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:14.999 [2024-11-04 12:36:49.406619] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:14.999 [2024-11-04 12:36:49.416845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:14.999 Malloc0 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:14.999 [2024-11-04 12:36:49.480719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1880767 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1880769 00:32:14.999 12:36:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:14.999 { 00:32:14.999 "params": { 00:32:14.999 "name": "Nvme$subsystem", 00:32:14.999 "trtype": "$TEST_TRANSPORT", 00:32:14.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:14.999 "adrfam": "ipv4", 00:32:14.999 "trsvcid": "$NVMF_PORT", 00:32:14.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:14.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:14.999 "hdgst": ${hdgst:-false}, 00:32:14.999 "ddgst": ${ddgst:-false} 00:32:14.999 }, 00:32:14.999 "method": "bdev_nvme_attach_controller" 00:32:14.999 } 00:32:14.999 EOF 00:32:14.999 )") 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1880771 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:14.999 { 00:32:14.999 "params": { 00:32:14.999 "name": "Nvme$subsystem", 00:32:14.999 "trtype": "$TEST_TRANSPORT", 00:32:14.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:14.999 "adrfam": "ipv4", 00:32:14.999 "trsvcid": "$NVMF_PORT", 00:32:14.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:14.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:14.999 "hdgst": ${hdgst:-false}, 00:32:14.999 "ddgst": ${ddgst:-false} 00:32:14.999 }, 00:32:14.999 "method": "bdev_nvme_attach_controller" 00:32:14.999 } 00:32:14.999 EOF 00:32:14.999 )") 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1880774 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:14.999 { 00:32:14.999 "params": { 00:32:14.999 "name": "Nvme$subsystem", 00:32:14.999 "trtype": "$TEST_TRANSPORT", 00:32:14.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:14.999 "adrfam": "ipv4", 00:32:14.999 "trsvcid": "$NVMF_PORT", 00:32:14.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:14.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:14.999 "hdgst": ${hdgst:-false}, 00:32:14.999 "ddgst": ${ddgst:-false} 00:32:14.999 }, 00:32:14.999 "method": "bdev_nvme_attach_controller" 00:32:14.999 } 00:32:14.999 EOF 00:32:14.999 )") 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:14.999 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:14.999 { 00:32:14.999 "params": { 00:32:14.999 "name": "Nvme$subsystem", 00:32:14.999 "trtype": "$TEST_TRANSPORT", 00:32:14.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:14.999 "adrfam": "ipv4", 00:32:14.999 "trsvcid": "$NVMF_PORT", 00:32:14.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:14.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:14.999 "hdgst": ${hdgst:-false}, 00:32:14.999 "ddgst": ${ddgst:-false} 00:32:14.999 }, 00:32:14.999 "method": "bdev_nvme_attach_controller" 00:32:14.999 } 00:32:14.999 EOF 00:32:15.000 )") 00:32:15.000 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:15.000 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1880767 00:32:15.000 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:15.000 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:15.000 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:32:15.000 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:15.000 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:15.000 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:15.000 "params": { 00:32:15.000 "name": "Nvme1", 00:32:15.000 "trtype": "tcp", 00:32:15.000 "traddr": "10.0.0.2", 00:32:15.000 "adrfam": "ipv4", 00:32:15.000 "trsvcid": "4420", 00:32:15.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:15.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:15.000 "hdgst": false, 00:32:15.000 "ddgst": false 00:32:15.000 }, 00:32:15.000 "method": "bdev_nvme_attach_controller" 00:32:15.000 }' 00:32:15.000 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:15.000 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:15.000 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:15.000 "params": { 00:32:15.000 "name": "Nvme1", 00:32:15.000 "trtype": "tcp", 00:32:15.000 "traddr": "10.0.0.2", 00:32:15.000 "adrfam": "ipv4", 00:32:15.000 "trsvcid": "4420", 00:32:15.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:15.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:15.000 "hdgst": false, 00:32:15.000 "ddgst": false 00:32:15.000 }, 00:32:15.000 "method": "bdev_nvme_attach_controller" 00:32:15.000 }' 00:32:15.000 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:15.000 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:15.000 "params": { 00:32:15.000 "name": "Nvme1", 00:32:15.000 "trtype": "tcp", 00:32:15.000 "traddr": "10.0.0.2", 00:32:15.000 "adrfam": "ipv4", 00:32:15.000 "trsvcid": "4420", 00:32:15.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:15.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:15.000 "hdgst": false, 00:32:15.000 "ddgst": false 00:32:15.000 }, 00:32:15.000 "method": "bdev_nvme_attach_controller" 00:32:15.000 }' 00:32:15.000 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:15.000 12:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:15.000 "params": { 00:32:15.000 "name": "Nvme1", 00:32:15.000 "trtype": "tcp", 00:32:15.000 "traddr": "10.0.0.2", 00:32:15.000 "adrfam": "ipv4", 00:32:15.000 "trsvcid": "4420", 00:32:15.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:15.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:15.000 "hdgst": false, 00:32:15.000 "ddgst": false 00:32:15.000 }, 00:32:15.000 "method": "bdev_nvme_attach_controller" 00:32:15.000 }' 00:32:15.000 [2024-11-04 12:36:49.534627] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:32:15.000 [2024-11-04 12:36:49.534684] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:15.000 [2024-11-04 12:36:49.537085] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:32:15.000 [2024-11-04 12:36:49.537135] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:15.000 [2024-11-04 12:36:49.538974] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:32:15.000 [2024-11-04 12:36:49.539019] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:15.000 [2024-11-04 12:36:49.540417] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:32:15.000 [2024-11-04 12:36:49.540463] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:15.270 [2024-11-04 12:36:49.680614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.270 [2024-11-04 12:36:49.710149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:15.270 [2024-11-04 12:36:49.724354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.270 [2024-11-04 12:36:49.752356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:15.270 [2024-11-04 12:36:49.782313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.270 [2024-11-04 12:36:49.811638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:15.270 [2024-11-04 12:36:49.831173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.530 [2024-11-04 12:36:49.859611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:15.530 Running I/O for 1 seconds... 00:32:15.530 Running I/O for 1 seconds... 00:32:15.530 Running I/O for 1 seconds... 00:32:15.530 Running I/O for 1 seconds... 
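The trace above shows how gen_nvmf_target_json builds one bdev_nvme_attach_controller fragment per subsystem in a heredoc, joins the fragments with IFS=',', pretty-prints the result through jq, and hands it to bdevperf over process substitution -- which is where the /dev/fd/63 argument comes from. Four bdevperf instances (core masks 0x10/0x20/0x40/0x80, file prefixes spdk1..spdk4, workloads write/read/flush/unmap) are launched this way in parallel and reaped with wait. A minimal single-subsystem sketch of the same pattern, with values hard-coded to the ones printed above (the real helper loops over "$@" and fills in $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT; the "subsystems" envelope is the standard SPDK JSON-config shape bdevperf expects, and $rootdir stands for the spdk checkout):

gen_config() {
  local frag
  # one attach_controller fragment; the helper emits one of these per subsystem
  frag=$(cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )
  # wrap in the bdev-subsystem envelope and pretty-print, as the jq . trace does
  jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ $frag ] } ] }
EOF
}

# bdevperf never sees a config file on disk: it arrives via process substitution
"$rootdir/build/examples/bdevperf" -m 0x80 -i 4 --json <(gen_config) \
  -q 128 -o 4096 -w unmap -t 1 -s 256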
00:32:16.472 17982.00 IOPS, 70.24 MiB/s 00:32:16.472 Latency(us) 00:32:16.472 [2024-11-04T11:36:51.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.472 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:16.472 Nvme1n1 : 1.01 18026.44 70.42 0.00 0.00 7082.74 3208.53 9338.88 00:32:16.472 [2024-11-04T11:36:51.042Z] =================================================================================================================== 00:32:16.472 [2024-11-04T11:36:51.042Z] Total : 18026.44 70.42 0.00 0.00 7082.74 3208.53 9338.88 00:32:16.472 12337.00 IOPS, 48.19 MiB/s 00:32:16.472 Latency(us) 00:32:16.472 [2024-11-04T11:36:51.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.472 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:16.472 Nvme1n1 : 1.01 12411.87 48.48 0.00 0.00 10278.97 2648.75 14090.24 00:32:16.472 [2024-11-04T11:36:51.042Z] =================================================================================================================== 00:32:16.472 [2024-11-04T11:36:51.042Z] Total : 12411.87 48.48 0.00 0.00 10278.97 2648.75 14090.24 00:32:16.472 184512.00 IOPS, 720.75 MiB/s 00:32:16.472 Latency(us) 00:32:16.472 [2024-11-04T11:36:51.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.472 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:16.472 Nvme1n1 : 1.00 184148.94 719.33 0.00 0.00 690.85 300.37 1966.08 00:32:16.472 [2024-11-04T11:36:51.042Z] =================================================================================================================== 00:32:16.472 [2024-11-04T11:36:51.042Z] Total : 184148.94 719.33 0.00 0.00 690.85 300.37 1966.08 00:32:16.734 11556.00 IOPS, 45.14 MiB/s 00:32:16.734 Latency(us) 00:32:16.734 [2024-11-04T11:36:51.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.734 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:16.734 Nvme1n1 : 1.01 11631.92 45.44 0.00 0.00 10969.17 2143.57 17803.95 00:32:16.734 [2024-11-04T11:36:51.304Z] =================================================================================================================== 00:32:16.734 [2024-11-04T11:36:51.304Z] Total : 11631.92 45.44 0.00 0.00 10969.17 2143.57 17803.95 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1880769 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1880771 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1880774 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:16.734 rmmod nvme_tcp 00:32:16.734 rmmod nvme_fabrics 00:32:16.734 rmmod nvme_keyring 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1880712 ']' 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1880712 00:32:16.734 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1880712 ']' 00:32:16.735 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1880712 00:32:16.735 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:32:16.735 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:16.735 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1880712 00:32:16.735 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:16.735 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:16.735 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1880712' 00:32:16.735 killing process with pid 1880712 00:32:16.735 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1880712 00:32:16.735 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1880712 00:32:16.995 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:16.995 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:16.995 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:16.995 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:16.995 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 
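The MiB/s columns in the four result tables above are just the IOPS scaled by the 4 KiB I/O size, i.e. MiB/s = IOPS * 4096 / 2^20 = IOPS / 256: 18026.44 / 256 = 70.42 for the read job, 12411.87 / 256 = 48.48 for unmap, 184148.94 / 256 = 719.33 for flush, and 11631.92 / 256 = 45.44 for write. The flush job posts an order of magnitude more IOPS because flush commands carry no data payload, so its throughput figure is nominal rather than wire bandwidth.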
00:32:16.995 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:16.995 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:32:16.995 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:16.995 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:16.995 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.995 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:16.995 12:36:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.544 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:19.544 00:32:19.544 real 0m12.585s 00:32:19.544 user 0m14.472s 00:32:19.544 sys 0m7.330s 00:32:19.544 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:19.544 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:19.544 ************************************ 00:32:19.544 END TEST nvmf_bdev_io_wait 00:32:19.544 ************************************ 00:32:19.544 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:19.544 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:19.544 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:19.544 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:19.544 ************************************ 00:32:19.544 START TEST nvmf_queue_depth 00:32:19.544 ************************************ 00:32:19.544 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:19.544 * Looking for test storage... 
00:32:19.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:19.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.545 --rc genhtml_branch_coverage=1 00:32:19.545 --rc genhtml_function_coverage=1 00:32:19.545 --rc genhtml_legend=1 00:32:19.545 --rc geninfo_all_blocks=1 00:32:19.545 --rc geninfo_unexecuted_blocks=1 00:32:19.545 00:32:19.545 ' 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:19.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.545 --rc genhtml_branch_coverage=1 00:32:19.545 --rc genhtml_function_coverage=1 00:32:19.545 --rc genhtml_legend=1 00:32:19.545 --rc geninfo_all_blocks=1 00:32:19.545 --rc geninfo_unexecuted_blocks=1 00:32:19.545 00:32:19.545 ' 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:19.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.545 --rc genhtml_branch_coverage=1 00:32:19.545 --rc genhtml_function_coverage=1 00:32:19.545 --rc genhtml_legend=1 00:32:19.545 --rc geninfo_all_blocks=1 00:32:19.545 --rc geninfo_unexecuted_blocks=1 00:32:19.545 00:32:19.545 ' 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:19.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:19.545 --rc genhtml_branch_coverage=1 00:32:19.545 --rc genhtml_function_coverage=1 00:32:19.545 --rc genhtml_legend=1 00:32:19.545 --rc geninfo_all_blocks=1 00:32:19.545 --rc 
geninfo_unexecuted_blocks=1 00:32:19.545 00:32:19.545 ' 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:19.545 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:19.546 12:36:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:27.693 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:27.694 12:37:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:27.694 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:27.694 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
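Device discovery above walks a table of known Intel E810/X722 and Mellanox PCI device IDs, keeps the two E810 ports (0x8086:0x159b, driver ice) that this e810-flagged run asks for, and then resolves each PCI address to its kernel interface name through sysfs. The resolution step reduces to the following sketch (pci_devs holds the matching addresses, here 0000:4b:00.0 and 0000:4b:00.1; link-state filtering is omitted):

net_devs=()
for pci in "${pci_devs[@]}"; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs bound to this port
  [ -e "${pci_net_devs[0]}" ] || continue            # skip ports with no netdev
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path prefix
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done
# here this yields cvl_0_0 and cvl_0_1, the two E810 ports used below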
00:32:27.694 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:27.694 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:27.694 12:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:27.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:27.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:32:27.694 00:32:27.694 --- 10.0.0.2 ping statistics --- 00:32:27.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.694 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:27.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:27.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:32:27.694 00:32:27.694 --- 10.0.0.1 ping statistics --- 00:32:27.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.694 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1885423 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1885423 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1885423 ']' 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
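The nvmf_tcp_init sequence above builds the two-endpoint topology every phy run uses: the first E810 port moves into a private network namespace and becomes the target side at 10.0.0.2, the second stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits the NVMe/TCP port, and a ping in each direction proves reachability. Condensed from the traced commands (pre-existing addresses are flushed first, omitted here):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The SPDK_NVMF comment tag is what lets the iptr teardown helper (iptables-save | grep -v SPDK_NVMF | iptables-restore, visible in the cleanup traces) strip exactly these rules later.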
00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:27.694 12:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.695 [2024-11-04 12:37:01.328129] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:27.695 [2024-11-04 12:37:01.329684] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:32:27.695 [2024-11-04 12:37:01.329772] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:27.695 [2024-11-04 12:37:01.421037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.695 [2024-11-04 12:37:01.471249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.695 [2024-11-04 12:37:01.471295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:27.695 [2024-11-04 12:37:01.471304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:27.695 [2024-11-04 12:37:01.471316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:27.695 [2024-11-04 12:37:01.471322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:27.695 [2024-11-04 12:37:01.472117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.695 [2024-11-04 12:37:01.548230] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:27.695 [2024-11-04 12:37:01.548520] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
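nvmfappstart -m 0x2 launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers; only after that do the interrupt-mode notices above confirm that app_thread and the poll-group threads are event-driven. A minimal version of that readiness gate, polling the always-available rpc_get_methods method (retry count and sleep interval are illustrative, and $rootdir again stands for the spdk checkout):

ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" \
  -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
  # UNIX sockets live in the filesystem, so the RPC probe works across netns
  if "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
    break
  fi
  kill -0 "$nvmfpid" || exit 1       # give up if the target already died
  sleep 0.1
done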
00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.695 [2024-11-04 12:37:02.184965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.695 Malloc0 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:27.695 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.957 [2024-11-04 12:37:02.261171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:27.957 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.957 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1885479 00:32:27.957 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:27.957 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:27.957 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1885479 /var/tmp/bdevperf.sock 00:32:27.957 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1885479 ']' 00:32:27.957 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:27.957 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:27.957 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:27.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:27.957 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:27.957 12:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:27.957 [2024-11-04 12:37:02.321734] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
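With the target listening, the test provisions everything over JSON-RPC and then drives load from a second process; rpc_cmd is the harness wrapper around scripts/rpc.py, talking to /var/tmp/spdk.sock by default or to the socket named with -s. The traced sequence, in order:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevperf (started with -z -r /var/tmp/bdevperf.sock) attaches
# over the fabric, then its helper script kicks off the -q 1024 verify run
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

The -a on nvmf_create_subsystem allows any host NQN to connect, which is why no host entries need to be added before the attach.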
00:32:27.957 [2024-11-04 12:37:02.321814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1885479 ] 00:32:27.957 [2024-11-04 12:37:02.388876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.957 [2024-11-04 12:37:02.431917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.900 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:28.900 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:32:28.900 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:28.900 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.900 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:28.900 NVMe0n1 00:32:28.900 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.900 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:28.900 Running I/O for 10 seconds... 00:32:31.227 9214.00 IOPS, 35.99 MiB/s [2024-11-04T11:37:06.739Z] 9220.00 IOPS, 36.02 MiB/s [2024-11-04T11:37:07.683Z] 9560.00 IOPS, 37.34 MiB/s [2024-11-04T11:37:08.627Z] 10244.25 IOPS, 40.02 MiB/s [2024-11-04T11:37:09.569Z] 10658.20 IOPS, 41.63 MiB/s [2024-11-04T11:37:10.633Z] 10939.83 IOPS, 42.73 MiB/s [2024-11-04T11:37:11.613Z] 11169.43 IOPS, 43.63 MiB/s [2024-11-04T11:37:12.555Z] 11329.25 IOPS, 44.25 MiB/s [2024-11-04T11:37:13.498Z] 11467.00 IOPS, 44.79 MiB/s [2024-11-04T11:37:13.759Z] 11569.60 IOPS, 45.19 MiB/s 00:32:39.190 Latency(us) 00:32:39.190 [2024-11-04T11:37:13.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.190 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:39.190 Verification LBA range: start 0x0 length 0x4000 00:32:39.190 NVMe0n1 : 10.06 11596.05 45.30 0.00 0.00 87991.45 24466.77 71215.79 00:32:39.190 [2024-11-04T11:37:13.760Z] =================================================================================================================== 00:32:39.190 [2024-11-04T11:37:13.760Z] Total : 11596.05 45.30 0.00 0.00 87991.45 24466.77 71215.79 00:32:39.190 { 00:32:39.190 "results": [ 00:32:39.190 { 00:32:39.190 "job": "NVMe0n1", 00:32:39.190 "core_mask": "0x1", 00:32:39.190 "workload": "verify", 00:32:39.190 "status": "finished", 00:32:39.190 "verify_range": { 00:32:39.190 "start": 0, 00:32:39.190 "length": 16384 00:32:39.190 }, 00:32:39.190 "queue_depth": 1024, 00:32:39.190 "io_size": 4096, 00:32:39.190 "runtime": 10.063688, 00:32:39.190 "iops": 11596.04709525971, 00:32:39.190 "mibps": 45.29705896585824, 00:32:39.190 "io_failed": 0, 00:32:39.190 "io_timeout": 0, 00:32:39.190 "avg_latency_us": 87991.44577554222, 00:32:39.190 "min_latency_us": 24466.773333333334, 00:32:39.190 "max_latency_us": 71215.78666666667 00:32:39.190 } 
00:32:39.190 ], 00:32:39.190 "core_count": 1 00:32:39.190 } 00:32:39.190 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1885479 00:32:39.190 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1885479 ']' 00:32:39.190 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1885479 00:32:39.190 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:32:39.190 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:39.190 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1885479 00:32:39.190 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:39.190 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:39.190 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1885479' 00:32:39.190 killing process with pid 1885479 00:32:39.190 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1885479 00:32:39.190 Received shutdown signal, test time was about 10.000000 seconds 00:32:39.190 00:32:39.190 Latency(us) 00:32:39.190 [2024-11-04T11:37:13.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.190 [2024-11-04T11:37:13.760Z] =================================================================================================================== 00:32:39.190 [2024-11-04T11:37:13.760Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:39.190 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1885479 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:39.451 rmmod nvme_tcp 00:32:39.451 rmmod nvme_fabrics 00:32:39.451 rmmod nvme_keyring 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
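The summary above is internally consistent: 11596.05 IOPS at 4 KiB per I/O is 11596.05 / 256 = 45.30 MiB/s, and over the 10.06 s runtime that is about 116,700 completed I/Os. Little's law ties the depth to the latency as well: 11596.05 IOPS x 87991 us mean latency gives roughly 1020 I/Os in flight, matching the configured queue depth of 1024; the small shortfall reflects the ramp visible in the per-second samples (9214 -> 11570 IOPS) while the connection warms up.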
00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1885423 ']' 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1885423 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1885423 ']' 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1885423 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1885423 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1885423' 00:32:39.451 killing process with pid 1885423 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1885423 00:32:39.451 12:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1885423 00:32:39.451 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:39.451 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:39.451 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:39.711 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:39.711 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:32:39.711 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:39.711 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:32:39.711 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:39.711 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:39.711 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.711 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.711 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.622 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:41.622 00:32:41.622 real 0m22.529s 00:32:41.622 user 0m24.947s 00:32:41.622 sys 0m7.312s 00:32:41.622 12:37:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:41.622 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:41.622 ************************************ 00:32:41.622 END TEST nvmf_queue_depth 00:32:41.622 ************************************ 00:32:41.622 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:41.622 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:41.622 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:41.622 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:41.622 ************************************ 00:32:41.622 START TEST nvmf_target_multipath 00:32:41.622 ************************************ 00:32:41.623 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:41.884 * Looking for test storage... 00:32:41.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:41.884 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:41.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.885 --rc genhtml_branch_coverage=1 00:32:41.885 --rc genhtml_function_coverage=1 00:32:41.885 --rc genhtml_legend=1 00:32:41.885 --rc geninfo_all_blocks=1 00:32:41.885 --rc geninfo_unexecuted_blocks=1 00:32:41.885 00:32:41.885 ' 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:41.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.885 --rc genhtml_branch_coverage=1 00:32:41.885 --rc genhtml_function_coverage=1 00:32:41.885 --rc genhtml_legend=1 00:32:41.885 --rc geninfo_all_blocks=1 00:32:41.885 --rc geninfo_unexecuted_blocks=1 00:32:41.885 00:32:41.885 ' 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:41.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.885 --rc genhtml_branch_coverage=1 00:32:41.885 --rc genhtml_function_coverage=1 00:32:41.885 --rc genhtml_legend=1 
00:32:41.885 --rc geninfo_all_blocks=1 00:32:41.885 --rc geninfo_unexecuted_blocks=1 00:32:41.885 00:32:41.885 ' 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:41.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.885 --rc genhtml_branch_coverage=1 00:32:41.885 --rc genhtml_function_coverage=1 00:32:41.885 --rc genhtml_legend=1 00:32:41.885 --rc geninfo_all_blocks=1 00:32:41.885 --rc geninfo_unexecuted_blocks=1 00:32:41.885 00:32:41.885 ' 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.885 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:41.886 12:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:50.023 12:37:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:50.023 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.023 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:50.023 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:50.024 12:37:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:50.024 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:50.024 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:50.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:50.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:32:50.024 00:32:50.024 --- 10.0.0.2 ping statistics --- 00:32:50.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.024 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:50.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:50.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:32:50.024 00:32:50.024 --- 10.0.0.1 ping statistics --- 00:32:50.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.024 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:50.024 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:50.025 only one NIC for nvmf test 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:50.025 rmmod nvme_tcp 00:32:50.025 rmmod nvme_fabrics 00:32:50.025 rmmod nvme_keyring 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:50.025 12:37:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.025 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:32:51.408 12:37:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:51.408 00:32:51.408 real 0m9.750s 00:32:51.408 user 0m2.178s 00:32:51.408 sys 0m5.522s 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:51.408 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:51.408 ************************************ 00:32:51.408 END TEST nvmf_target_multipath 00:32:51.408 ************************************ 00:32:51.670 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:51.670 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:51.670 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:51.670 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:51.670 ************************************ 00:32:51.670 START TEST nvmf_zcopy 00:32:51.670 ************************************ 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:51.670 * Looking for test storage... 
00:32:51.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:51.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.670 --rc genhtml_branch_coverage=1 00:32:51.670 --rc genhtml_function_coverage=1 00:32:51.670 --rc genhtml_legend=1 00:32:51.670 --rc geninfo_all_blocks=1 00:32:51.670 --rc geninfo_unexecuted_blocks=1 00:32:51.670 00:32:51.670 ' 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:51.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.670 --rc genhtml_branch_coverage=1 00:32:51.670 --rc genhtml_function_coverage=1 00:32:51.670 --rc genhtml_legend=1 00:32:51.670 --rc geninfo_all_blocks=1 00:32:51.670 --rc geninfo_unexecuted_blocks=1 00:32:51.670 00:32:51.670 ' 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:51.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.670 --rc genhtml_branch_coverage=1 00:32:51.670 --rc genhtml_function_coverage=1 00:32:51.670 --rc genhtml_legend=1 00:32:51.670 --rc geninfo_all_blocks=1 00:32:51.670 --rc geninfo_unexecuted_blocks=1 00:32:51.670 00:32:51.670 ' 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:51.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.670 --rc genhtml_branch_coverage=1 00:32:51.670 --rc genhtml_function_coverage=1 00:32:51.670 --rc genhtml_legend=1 00:32:51.670 --rc geninfo_all_blocks=1 00:32:51.670 --rc geninfo_unexecuted_blocks=1 00:32:51.670 00:32:51.670 ' 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:51.670 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:51.931 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:51.931 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:51.931 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:51.931 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:51.931 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:51.931 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:51.931 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:51.931 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:51.932 12:37:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:51.932 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:00.068 12:37:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:00.068 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.068 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:00.069 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:00.069 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:00.069 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:00.069 12:37:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:00.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:00.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:33:00.069 00:33:00.069 --- 10.0.0.2 ping statistics --- 00:33:00.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.069 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:00.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:00.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:33:00.069 00:33:00.069 --- 10.0.0.1 ping statistics --- 00:33:00.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.069 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1895953 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1895953 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1895953 ']' 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:00.069 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.069 [2024-11-04 12:37:33.498219] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:00.069 [2024-11-04 12:37:33.499378] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:33:00.069 [2024-11-04 12:37:33.499433] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.069 [2024-11-04 12:37:33.586190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.069 [2024-11-04 12:37:33.636405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.069 [2024-11-04 12:37:33.636450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:00.069 [2024-11-04 12:37:33.636459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.069 [2024-11-04 12:37:33.636466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.069 [2024-11-04 12:37:33.636472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:00.069 [2024-11-04 12:37:33.637184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.070 [2024-11-04 12:37:33.707927] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:00.070 [2024-11-04 12:37:33.708216] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
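The trace above reduces to a small amount of shell: the two ice-driven E810 ports found earlier (cvl_0_0, cvl_0_1) are split across a network namespace so target and initiator can talk over real hardware, and nvmf_tgt is then launched inside that namespace in interrupt mode on a single core. A condensed, slightly reordered replay of the commands visible in the trace (interface names, the 10.0.0.0/24 addressing, and the nvmf_tgt flags are taken directly from the log; the binary path is written relative to the SPDK tree):

  # target side lives in its own namespace, initiator side stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator interface
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target interface
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # start the target in the namespace: interrupt mode, core mask 0x2 (one reactor, core 1)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &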
00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.070 [2024-11-04 12:37:34.342050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.070 [2024-11-04 12:37:34.370340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:00.070 12:37:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.070 malloc0 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:00.070 { 00:33:00.070 "params": { 00:33:00.070 "name": "Nvme$subsystem", 00:33:00.070 "trtype": "$TEST_TRANSPORT", 00:33:00.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:00.070 "adrfam": "ipv4", 00:33:00.070 "trsvcid": "$NVMF_PORT", 00:33:00.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:00.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:00.070 "hdgst": ${hdgst:-false}, 00:33:00.070 "ddgst": ${ddgst:-false} 00:33:00.070 }, 00:33:00.070 "method": "bdev_nvme_attach_controller" 00:33:00.070 } 00:33:00.070 EOF 00:33:00.070 )") 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:33:00.070 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:00.070 "params": { 00:33:00.070 "name": "Nvme1", 00:33:00.070 "trtype": "tcp", 00:33:00.070 "traddr": "10.0.0.2", 00:33:00.070 "adrfam": "ipv4", 00:33:00.070 "trsvcid": "4420", 00:33:00.070 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:00.070 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:00.070 "hdgst": false, 00:33:00.070 "ddgst": false 00:33:00.070 }, 00:33:00.070 "method": "bdev_nvme_attach_controller" 00:33:00.070 }' 00:33:00.070 [2024-11-04 12:37:34.456298] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
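What bdevperf receives here is the JSON printed by gen_nvmf_target_json a few lines up: a single bdev_nvme_attach_controller call pointing Nvme1 at the listener created above (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1, digests off). target/zcopy.sh@33 evidently feeds it through a process substitution, which is why the command line shows --json /dev/fd/62. A minimal equivalent invocation, under those assumptions (path relative to the SPDK tree; the outer wrapper object around the attach call is produced by gen_nvmf_target_json and is not fully shown in this trace):

  # 10 s verify workload against the zcopy-enabled target, queue depth 128, 8 KiB IOs
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192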
00:33:00.070 [2024-11-04 12:37:34.456357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896134 ] 00:33:00.070 [2024-11-04 12:37:34.517869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.070 [2024-11-04 12:37:34.555266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.331 Running I/O for 10 seconds... 00:33:02.211 6543.00 IOPS, 51.12 MiB/s [2024-11-04T11:37:37.721Z] 6581.00 IOPS, 51.41 MiB/s [2024-11-04T11:37:39.101Z] 6593.00 IOPS, 51.51 MiB/s [2024-11-04T11:37:40.041Z] 6602.50 IOPS, 51.58 MiB/s [2024-11-04T11:37:40.981Z] 6609.80 IOPS, 51.64 MiB/s [2024-11-04T11:37:41.921Z] 6609.00 IOPS, 51.63 MiB/s [2024-11-04T11:37:42.860Z] 6611.14 IOPS, 51.65 MiB/s [2024-11-04T11:37:43.799Z] 6967.75 IOPS, 54.44 MiB/s [2024-11-04T11:37:44.739Z] 7258.56 IOPS, 56.71 MiB/s [2024-11-04T11:37:44.739Z] 7491.90 IOPS, 58.53 MiB/s 00:33:10.169 Latency(us) 00:33:10.169 [2024-11-04T11:37:44.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.169 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:10.169 Verification LBA range: start 0x0 length 0x1000 00:33:10.169 Nvme1n1 : 10.01 7497.06 58.57 0.00 0.00 17019.25 2252.80 26105.17 00:33:10.169 [2024-11-04T11:37:44.739Z] =================================================================================================================== 00:33:10.169 [2024-11-04T11:37:44.739Z] Total : 7497.06 58.57 0.00 0.00 17019.25 2252.80 26105.17 00:33:10.429 12:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1898132 00:33:10.429 12:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:10.429 12:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:10.429 12:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:10.429 12:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:10.429 12:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:33:10.429 12:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:33:10.429 12:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:10.429 12:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:10.429 { 00:33:10.429 "params": { 00:33:10.429 "name": "Nvme$subsystem", 00:33:10.429 "trtype": "$TEST_TRANSPORT", 00:33:10.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:10.429 "adrfam": "ipv4", 00:33:10.429 "trsvcid": "$NVMF_PORT", 00:33:10.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:10.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:10.429 "hdgst": ${hdgst:-false}, 00:33:10.429 "ddgst": ${ddgst:-false} 00:33:10.429 }, 00:33:10.429 "method": "bdev_nvme_attach_controller" 00:33:10.429 } 00:33:10.429 EOF 00:33:10.429 )") 00:33:10.429 [2024-11-04 12:37:44.849568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:33:10.429 [2024-11-04 12:37:44.849596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.429 12:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:33:10.429 12:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:33:10.429 12:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:33:10.429 12:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:10.429 "params": { 00:33:10.429 "name": "Nvme1", 00:33:10.429 "trtype": "tcp", 00:33:10.429 "traddr": "10.0.0.2", 00:33:10.429 "adrfam": "ipv4", 00:33:10.429 "trsvcid": "4420", 00:33:10.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:10.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:10.429 "hdgst": false, 00:33:10.429 "ddgst": false 00:33:10.429 }, 00:33:10.429 "method": "bdev_nvme_attach_controller" 00:33:10.429 }' 00:33:10.429 [2024-11-04 12:37:44.861534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.430 [2024-11-04 12:37:44.861542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.430 [2024-11-04 12:37:44.873533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.430 [2024-11-04 12:37:44.873540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.430 [2024-11-04 12:37:44.885533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.430 [2024-11-04 12:37:44.885540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.430 [2024-11-04 12:37:44.893590] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:33:10.430 [2024-11-04 12:37:44.893638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1898132 ] 00:33:10.430 [2024-11-04 12:37:44.897532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.430 [2024-11-04 12:37:44.897539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.430 [2024-11-04 12:37:44.909532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.430 [2024-11-04 12:37:44.909538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.430 [2024-11-04 12:37:44.921532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.430 [2024-11-04 12:37:44.921540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.430 [2024-11-04 12:37:44.933532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.430 [2024-11-04 12:37:44.933539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.430 [2024-11-04 12:37:44.945532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.430 [2024-11-04 12:37:44.945539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.430 [2024-11-04 12:37:44.952548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.430 [2024-11-04 12:37:44.957533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.430 [2024-11-04 12:37:44.957542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.430 [2024-11-04 12:37:44.969533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.430 [2024-11-04 12:37:44.969542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.430 [2024-11-04 12:37:44.981533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.430 [2024-11-04 12:37:44.981542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.430 [2024-11-04 12:37:44.987912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:10.430 [2024-11-04 12:37:44.993532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.430 [2024-11-04 12:37:44.993540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 [2024-11-04 12:37:45.005538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.690 [2024-11-04 12:37:45.005550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 [2024-11-04 12:37:45.017535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.690 [2024-11-04 12:37:45.017547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 [2024-11-04 12:37:45.029533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.690 [2024-11-04 12:37:45.029541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 [2024-11-04 12:37:45.041533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:33:10.690 [2024-11-04 12:37:45.041542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 [2024-11-04 12:37:45.053532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.690 [2024-11-04 12:37:45.053538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 [2024-11-04 12:37:45.065547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.690 [2024-11-04 12:37:45.065563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 [2024-11-04 12:37:45.077535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.690 [2024-11-04 12:37:45.077544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 [2024-11-04 12:37:45.089534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.690 [2024-11-04 12:37:45.089548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 [2024-11-04 12:37:45.101533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.690 [2024-11-04 12:37:45.101539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 [2024-11-04 12:37:45.113532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.690 [2024-11-04 12:37:45.113538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 [2024-11-04 12:37:45.125532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.690 [2024-11-04 12:37:45.125540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 [2024-11-04 12:37:45.137533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.690 [2024-11-04 12:37:45.137542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 [2024-11-04 12:37:45.149534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.690 [2024-11-04 12:37:45.149543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 [2024-11-04 12:37:45.197235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.690 [2024-11-04 12:37:45.197250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 [2024-11-04 12:37:45.205536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.690 [2024-11-04 12:37:45.205548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 Running I/O for 5 seconds... 
00:33:10.690 [2024-11-04 12:37:45.220777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.690 [2024-11-04 12:37:45.220794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 [2024-11-04 12:37:45.233344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.690 [2024-11-04 12:37:45.233361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.690 [2024-11-04 12:37:45.246311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.690 [2024-11-04 12:37:45.246327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.260766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.260783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.274118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.274132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.288919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.288935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.301717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.301731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.317152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.317167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.330556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.330570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.345169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.345184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.358138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.358152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.372626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.372642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.385571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.385586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.397250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.397265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.410456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 
[2024-11-04 12:37:45.410470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.424628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.424643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.437120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.437135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.449588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.449603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.461987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.462001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.476627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.476642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.489767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.489782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.501313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.501328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:10.951 [2024-11-04 12:37:45.514117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:10.951 [2024-11-04 12:37:45.514131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.528661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.528677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.541994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.542009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.556480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.556495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.569449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.569464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.581527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.581542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.594226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.594240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.608635] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.608650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.622004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.622018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.636588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.636602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.649597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.649612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.661483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.661498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.674334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.674349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.689436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.689451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.701759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.701773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.716835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.716850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.729730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.729750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.742050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.742064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.757008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.757023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.212 [2024-11-04 12:37:45.769985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.212 [2024-11-04 12:37:45.770000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.473 [2024-11-04 12:37:45.784458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.473 [2024-11-04 12:37:45.784473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.473 [2024-11-04 12:37:45.797821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.473 [2024-11-04 12:37:45.797835] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.473 [2024-11-04 12:37:45.812498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.473 [2024-11-04 12:37:45.812513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.473 [2024-11-04 12:37:45.825527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.473 [2024-11-04 12:37:45.825542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.473 [2024-11-04 12:37:45.838110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.473 [2024-11-04 12:37:45.838124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.473 [2024-11-04 12:37:45.852824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.473 [2024-11-04 12:37:45.852839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-11-04 12:37:45.865596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-11-04 12:37:45.865611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-11-04 12:37:45.878272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-11-04 12:37:45.878287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-11-04 12:37:45.892330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-11-04 12:37:45.892345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-11-04 12:37:45.905059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-11-04 12:37:45.905074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-11-04 12:37:45.917477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-11-04 12:37:45.917493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-11-04 12:37:45.930163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-11-04 12:37:45.930178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-11-04 12:37:45.944818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-11-04 12:37:45.944833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-11-04 12:37:45.957833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-11-04 12:37:45.957848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-11-04 12:37:45.972770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-11-04 12:37:45.972785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-11-04 12:37:45.985496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:11.474 [2024-11-04 12:37:45.985511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:11.474 [2024-11-04 12:37:45.998037] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:11.474 [2024-11-04 12:37:45.998051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:11.474 [2024-11-04 12:37:46.012404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:11.474 [2024-11-04 12:37:46.012419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" pair repeats every 12-15 ms; duplicate entries omitted through 12:37:46.213 ...]
18854.00 IOPS, 147.30 MiB/s [2024-11-04T11:37:46.305Z]
[... duplicate error pairs omitted through 12:37:47.205 ...]
18881.00 IOPS, 147.51 MiB/s [2024-11-04T11:37:47.352Z]
[... duplicate error pairs omitted through 12:37:48.217 ...]
18924.00 IOPS, 147.84 MiB/s [2024-11-04T11:37:48.397Z]
[... duplicate error pairs omitted through 12:37:49.217 ...]
18921.25 IOPS, 147.82 MiB/s [2024-11-04T11:37:49.443Z]
[... duplicate error pairs omitted through 12:37:50.069 ...]
00:33:15.659 [2024-11-04 12:37:50.081807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:15.659 [2024-11-04 12:37:50.081822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:15.659 [2024-11-04 12:37:50.097054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-04 12:37:50.097069]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.659 [2024-11-04 12:37:50.110060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.659 [2024-11-04 12:37:50.110075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.659 [2024-11-04 12:37:50.124951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.659 [2024-11-04 12:37:50.124968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.659 [2024-11-04 12:37:50.137701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.659 [2024-11-04 12:37:50.137716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.659 [2024-11-04 12:37:50.149820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.659 [2024-11-04 12:37:50.149835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.659 [2024-11-04 12:37:50.164784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.659 [2024-11-04 12:37:50.164799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.659 [2024-11-04 12:37:50.178249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.659 [2024-11-04 12:37:50.178264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.659 [2024-11-04 12:37:50.193157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.659 [2024-11-04 12:37:50.193173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.659 [2024-11-04 12:37:50.206154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.659 [2024-11-04 12:37:50.206169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.660 [2024-11-04 12:37:50.220495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.660 [2024-11-04 12:37:50.220509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.921 18920.80 IOPS, 147.82 MiB/s [2024-11-04T11:37:50.491Z] [2024-11-04 12:37:50.229539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.921 [2024-11-04 12:37:50.229553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.921 00:33:15.921 Latency(us) 00:33:15.921 [2024-11-04T11:37:50.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:15.921 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:33:15.921 Nvme1n1 : 5.01 18922.75 147.83 0.00 0.00 6757.68 2539.52 12178.77 00:33:15.921 [2024-11-04T11:37:50.491Z] =================================================================================================================== 00:33:15.921 [2024-11-04T11:37:50.491Z] Total : 18922.75 147.83 0.00 0.00 6757.68 2539.52 12178.77 00:33:15.921 [2024-11-04 12:37:50.241537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.921 [2024-11-04 12:37:50.241551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:15.921 [2024-11-04 12:37:50.253540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:15.921 [2024-11-04 
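The flood of identical errors above is the expected outcome of this phase of the zcopy test: while the perf job keeps random read/write I/O in flight against NSID 1, the script repeatedly offers the target another bdev under the same NSID and checks that every attempt is rejected cleanly rather than disturbing the active namespace. A minimal sketch of the same check, assuming a running target, SPDK's scripts/rpc.py on PATH, and a subsystem nqn.2016-06.io.spdk:cnode1 that already serves NSID 1 (the bdev name malloc1 is illustrative, not from this run):

# sketch: a duplicate NSID must be rejected by the target
rpc.py bdev_malloc_create 64 512 -b malloc1
if rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1; then
    echo "BUG: duplicate NSID 1 was accepted" >&2
    exit 1
fi
echo "duplicate NSID rejected, as in the errors above"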
00:33:15.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1898132) - No such process
00:33:15.921 12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1898132
12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
delay0
12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
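Here the script swaps the namespace's backing device for a delay bdev that adds roughly one second (1000000 us) of artificial latency to every read and write. That keeps queued commands outstanding long enough for the abort example, launched next in the trace, to have real in-flight I/O to cancel. Condensed from the trace, with the same values zcopy.sh uses (rpc.py stands in for the harness's rpc_cmd wrapper, and the workspace path is abbreviated):

# wrap malloc0 in a delay bdev so completions take ~1 s
rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# with completions delayed, abort requests arrive while I/O is still queued
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'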
00:33:15.921 12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-11-04 12:37:50.434470] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:33:22.511 Initializing NVMe Controllers
00:33:22.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:22.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:22.511 Initialization complete. Launching workers.
00:33:22.511 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 295, failed: 8870
00:33:22.511 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 9107, failed to submit 58
00:33:22.511 success 8943, unsuccessful 164, failed 0
00:33:22.511 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:22.511 rmmod nvme_tcp
00:33:22.511 rmmod nvme_fabrics
00:33:22.511 rmmod nvme_keyring
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1895953 ']'
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1895953
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1895953 ']'
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1895953
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1895953
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- #
process_name=reactor_1 00:33:22.511 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:22.511 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1895953' 00:33:22.511 killing process with pid 1895953 00:33:22.511 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1895953 00:33:22.511 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1895953 00:33:22.773 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:22.773 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:22.773 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:22.773 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:22.773 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:33:22.773 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:22.773 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:33:22.773 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:22.773 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:22.773 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.773 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.773 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.688 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:24.688 00:33:24.688 real 0m33.152s 00:33:24.688 user 0m42.568s 00:33:24.688 sys 0m11.685s 00:33:24.688 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:24.688 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:24.688 ************************************ 00:33:24.688 END TEST nvmf_zcopy 00:33:24.688 ************************************ 00:33:24.688 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:24.688 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:24.688 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:24.688 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:24.688 ************************************ 00:33:24.688 START TEST nvmf_nmic 00:33:24.688 ************************************ 00:33:24.689 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:24.950 * Looking for test storage... 00:33:24.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:24.950 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:24.950 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:33:24.950 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:24.950 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:24.950 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:24.950 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:24.950 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:24.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.951 --rc genhtml_branch_coverage=1 00:33:24.951 --rc genhtml_function_coverage=1 00:33:24.951 --rc genhtml_legend=1 00:33:24.951 --rc geninfo_all_blocks=1 00:33:24.951 --rc geninfo_unexecuted_blocks=1 00:33:24.951 00:33:24.951 ' 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:24.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.951 --rc genhtml_branch_coverage=1 00:33:24.951 --rc genhtml_function_coverage=1 00:33:24.951 --rc genhtml_legend=1 00:33:24.951 --rc geninfo_all_blocks=1 00:33:24.951 --rc geninfo_unexecuted_blocks=1 00:33:24.951 00:33:24.951 ' 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:24.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.951 --rc genhtml_branch_coverage=1 00:33:24.951 --rc genhtml_function_coverage=1 00:33:24.951 --rc genhtml_legend=1 00:33:24.951 --rc geninfo_all_blocks=1 00:33:24.951 --rc geninfo_unexecuted_blocks=1 00:33:24.951 00:33:24.951 ' 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:24.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.951 --rc genhtml_branch_coverage=1 00:33:24.951 --rc genhtml_function_coverage=1 00:33:24.951 --rc genhtml_legend=1 00:33:24.951 --rc geninfo_all_blocks=1 00:33:24.951 --rc geninfo_unexecuted_blocks=1 00:33:24.951 00:33:24.951 ' 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:24.951 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:24.952 12:37:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:24.952 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:33.098 12:38:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:33.098 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:33.098 12:38:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:33.098 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:33.098 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.098 
12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:33.098 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:33.098 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
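The nvmftestinit sequence traced here gives the target side of the NIC its own network namespace, so the two physical ports (cvl_0_0 and cvl_0_1 on this host; names will differ elsewhere) exercise a real wire between the initiator at 10.0.0.1 and the target at 10.0.0.2. Condensed into plain commands, including the link-up and firewall steps the trace continues with below:

ip netns add cvl_0_0_ns_spdk                      # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator stays in the host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port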
00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:33.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:33.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:33:33.099 00:33:33.099 --- 10.0.0.2 ping statistics --- 00:33:33.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.099 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:33.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:33.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:33:33.099 00:33:33.099 --- 10.0.0.1 ping statistics --- 00:33:33.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.099 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1904466 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # waitforlisten 1904466 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1904466 ']' 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:33.099 12:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:33.099 [2024-11-04 12:38:06.849567] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:33.099 [2024-11-04 12:38:06.850719] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:33:33.099 [2024-11-04 12:38:06.850781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:33.099 [2024-11-04 12:38:06.922086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:33.099 [2024-11-04 12:38:06.965828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:33.099 [2024-11-04 12:38:06.965866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:33.099 [2024-11-04 12:38:06.965875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.099 [2024-11-04 12:38:06.965884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.099 [2024-11-04 12:38:06.965891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:33.099 [2024-11-04 12:38:06.967706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.099 [2024-11-04 12:38:06.967917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.099 [2024-11-04 12:38:06.967918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:33.099 [2024-11-04 12:38:06.967763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:33.099 [2024-11-04 12:38:07.024177] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:33.099 [2024-11-04 12:38:07.024286] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:33.099 [2024-11-04 12:38:07.025253] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
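The notices above all follow from how nvmfappstart launched the target: inside the test namespace, with --interrupt-mode and a 0xF core mask, so DPDK reports four available cores, one reactor starts per core, and each nvmf poll-group thread is switched to interrupt mode. A rough equivalent of what the nvmfappstart/waitforlisten helpers do (the readiness loop below is a stand-in for waitforlisten, not the harness's exact implementation):

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
# block until the target's RPC socket answers before provisioning it
until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done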
00:33:33.099 [2024-11-04 12:38:07.026082] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:33.099 [2024-11-04 12:38:07.026154] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:33.099 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:33.099 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:33:33.099 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:33.099 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:33.099 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:33.361 [2024-11-04 12:38:07.696678] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:33.361 Malloc0 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
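Condensing the rpc_cmd sequence just traced, the nmic test provisions its target in five calls, all with the values shown in the trace (again using rpc.py directly in place of the harness's rpc_cmd wrapper):

rpc.py nvmf_create_transport -t tcp -o -u 8192     # transport options exactly as traced
rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM disk, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # becomes NSID 1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420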
00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:33.361 [2024-11-04 12:38:07.768564] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:33.361 test case1: single bdev can't be used in multiple subsystems 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.361 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:33.361 [2024-11-04 12:38:07.804304] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:33.361 [2024-11-04 12:38:07.804325] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:33.361 [2024-11-04 12:38:07.804333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.362 request: 00:33:33.362 { 00:33:33.362 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:33.362 "namespace": { 00:33:33.362 "bdev_name": "Malloc0", 00:33:33.362 "no_auto_visible": false 00:33:33.362 }, 00:33:33.362 "method": "nvmf_subsystem_add_ns", 00:33:33.362 "req_id": 1 00:33:33.362 } 00:33:33.362 Got JSON-RPC error response 00:33:33.362 response: 00:33:33.362 { 00:33:33.362 "code": -32602, 00:33:33.362 "message": "Invalid parameters" 00:33:33.362 } 00:33:33.362 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:33.362 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:33.362 12:38:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:33.362 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:33.362 Adding namespace failed - expected result. 00:33:33.362 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:33.362 test case2: host connect to nvmf target in multiple paths 00:33:33.362 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:33.362 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.362 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:33.362 [2024-11-04 12:38:07.816414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:33.362 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.362 12:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:33.622 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:34.319 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:34.319 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:33:34.319 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:34.319 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:33:34.319 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:33:36.269 12:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:36.269 12:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:36.269 12:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:36.270 12:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:33:36.270 12:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:36.270 12:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:33:36.270 12:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:36.270 [global] 00:33:36.270 thread=1 00:33:36.270 invalidate=1 
00:33:36.270 rw=write 00:33:36.270 time_based=1 00:33:36.270 runtime=1 00:33:36.270 ioengine=libaio 00:33:36.270 direct=1 00:33:36.270 bs=4096 00:33:36.270 iodepth=1 00:33:36.270 norandommap=0 00:33:36.270 numjobs=1 00:33:36.270 00:33:36.270 verify_dump=1 00:33:36.270 verify_backlog=512 00:33:36.270 verify_state_save=0 00:33:36.270 do_verify=1 00:33:36.270 verify=crc32c-intel 00:33:36.270 [job0] 00:33:36.270 filename=/dev/nvme0n1 00:33:36.270 Could not set queue depth (nvme0n1) 00:33:36.532 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:36.532 fio-3.35 00:33:36.532 Starting 1 thread 00:33:37.918 00:33:37.918 job0: (groupid=0, jobs=1): err= 0: pid=1905410: Mon Nov 4 12:38:12 2024 00:33:37.918 read: IOPS=78, BW=316KiB/s (323kB/s)(316KiB/1001msec) 00:33:37.918 slat (nsec): min=9227, max=75061, avg=26721.84, stdev=7201.64 00:33:37.918 clat (usec): min=815, max=41994, avg=8185.10, stdev=15560.72 00:33:37.918 lat (usec): min=832, max=42021, avg=8211.82, stdev=15560.61 00:33:37.918 clat percentiles (usec): 00:33:37.918 | 1.00th=[ 816], 5.00th=[ 832], 10.00th=[ 898], 20.00th=[ 963], 00:33:37.918 | 30.00th=[ 996], 40.00th=[ 1029], 50.00th=[ 1037], 60.00th=[ 1057], 00:33:37.918 | 70.00th=[ 1074], 80.00th=[ 1205], 90.00th=[41681], 95.00th=[42206], 00:33:37.918 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:37.918 | 99.99th=[42206] 00:33:37.918 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:33:37.918 slat (usec): min=10, max=28630, avg=86.42, stdev=1263.98 00:33:37.918 clat (usec): min=219, max=801, avg=592.61, stdev=98.51 00:33:37.918 lat (usec): min=254, max=29377, avg=679.03, stdev=1274.95 00:33:37.918 clat percentiles (usec): 00:33:37.918 | 1.00th=[ 338], 5.00th=[ 404], 10.00th=[ 457], 20.00th=[ 510], 00:33:37.918 | 30.00th=[ 562], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 619], 00:33:37.918 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 709], 95.00th=[ 734], 00:33:37.918 | 99.00th=[ 766], 99.50th=[ 783], 99.90th=[ 799], 99.95th=[ 799], 00:33:37.918 | 99.99th=[ 799] 00:33:37.918 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:33:37.918 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:37.918 lat (usec) : 250=0.17%, 500=15.23%, 750=68.36%, 1000=7.28% 00:33:37.918 lat (msec) : 2=6.60%, 50=2.37% 00:33:37.918 cpu : usr=0.90%, sys=1.70%, ctx=595, majf=0, minf=1 00:33:37.918 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:37.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:37.918 issued rwts: total=79,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:37.918 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:37.918 00:33:37.918 Run status group 0 (all jobs): 00:33:37.918 READ: bw=316KiB/s (323kB/s), 316KiB/s-316KiB/s (323kB/s-323kB/s), io=316KiB (324kB), run=1001-1001msec 00:33:37.918 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:33:37.918 00:33:37.918 Disk stats (read/write): 00:33:37.918 nvme0n1: ios=42/512, merge=0/0, ticks=1531/288, in_queue=1819, util=98.70% 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:37.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:37.918 12:38:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:37.918 rmmod nvme_tcp 00:33:37.918 rmmod nvme_fabrics 00:33:37.918 rmmod nvme_keyring 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1904466 ']' 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1904466 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1904466 ']' 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1904466 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1904466 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 1904466' 00:33:37.918 killing process with pid 1904466 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1904466 00:33:37.918 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1904466 00:33:38.179 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:38.179 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:38.179 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:38.179 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:38.179 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:33:38.179 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:33:38.179 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:38.179 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:38.179 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:38.179 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.179 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.179 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.727 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:40.727 00:33:40.728 real 0m15.424s 00:33:40.728 user 0m32.993s 00:33:40.728 sys 0m7.473s 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:40.728 ************************************ 00:33:40.728 END TEST nvmf_nmic 00:33:40.728 ************************************ 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:40.728 ************************************ 00:33:40.728 START TEST nvmf_fio_target 00:33:40.728 ************************************ 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:40.728 * Looking for test storage... 
00:33:40.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:40.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.728 --rc genhtml_branch_coverage=1 00:33:40.728 --rc genhtml_function_coverage=1 00:33:40.728 --rc genhtml_legend=1 00:33:40.728 --rc geninfo_all_blocks=1 00:33:40.728 --rc geninfo_unexecuted_blocks=1 00:33:40.728 00:33:40.728 ' 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:40.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.728 --rc genhtml_branch_coverage=1 00:33:40.728 --rc genhtml_function_coverage=1 00:33:40.728 --rc genhtml_legend=1 00:33:40.728 --rc geninfo_all_blocks=1 00:33:40.728 --rc geninfo_unexecuted_blocks=1 00:33:40.728 00:33:40.728 ' 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:40.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.728 --rc genhtml_branch_coverage=1 00:33:40.728 --rc genhtml_function_coverage=1 00:33:40.728 --rc genhtml_legend=1 00:33:40.728 --rc geninfo_all_blocks=1 00:33:40.728 --rc geninfo_unexecuted_blocks=1 00:33:40.728 00:33:40.728 ' 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:40.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.728 --rc genhtml_branch_coverage=1 00:33:40.728 --rc genhtml_function_coverage=1 00:33:40.728 --rc genhtml_legend=1 00:33:40.728 --rc geninfo_all_blocks=1 00:33:40.728 --rc geninfo_unexecuted_blocks=1 00:33:40.728 
00:33:40.728 ' 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.728 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:40.729 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.316 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:47.317 12:38:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:47.317 12:38:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:47.317 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:47.317 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:47.317 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:47.317 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:47.317 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:47.579 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:47.579 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:47.579 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:47.579 12:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:47.579 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:47.579 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:47.579 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:47.579 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:47.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:47.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:33:47.579 00:33:47.579 --- 10.0.0.2 ping statistics --- 00:33:47.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.579 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:33:47.579 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:47.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:47.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:33:47.579 00:33:47.579 --- 10.0.0.1 ping statistics --- 00:33:47.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.579 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:33:47.579 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:47.579 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:33:47.579 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:47.579 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:47.579 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:47.579 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:47.579 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:47.579 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:47.580 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:47.580 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:33:47.580 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:47.580 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:47.580 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.580 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1909819 00:33:47.580 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1909819 00:33:47.580 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:47.580 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1909819 ']' 00:33:47.580 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.580 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:47.580 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:47.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
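Condensed, the network-namespace plumbing the harness just performed looks like the sketch below, assuming the two e810 ports come up as cvl_0_0 (target side, moved into the namespace) and cvl_0_1 (initiator side, left in the root namespace), as in the device scan above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator-facing interface
  # (the harness tags the rule with an SPDK_NVMF comment so it can be removed later)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Both pings succeeding (0.643 ms and 0.273 ms above) is what lets the harness proceed to start the target.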
00:33:47.580 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:47.580 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.580 [2024-11-04 12:38:22.146660] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:47.580 [2024-11-04 12:38:22.147706] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:33:47.580 [2024-11-04 12:38:22.147759] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.840 [2024-11-04 12:38:22.217887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:47.840 [2024-11-04 12:38:22.259540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:47.840 [2024-11-04 12:38:22.259577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:47.840 [2024-11-04 12:38:22.259585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:47.840 [2024-11-04 12:38:22.259592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:47.840 [2024-11-04 12:38:22.259598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:47.840 [2024-11-04 12:38:22.261379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.840 [2024-11-04 12:38:22.261505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:47.840 [2024-11-04 12:38:22.261672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:47.840 [2024-11-04 12:38:22.261673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:47.840 [2024-11-04 12:38:22.317521] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:47.840 [2024-11-04 12:38:22.317548] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:47.840 [2024-11-04 12:38:22.318519] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:47.840 [2024-11-04 12:38:22.319071] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:47.840 [2024-11-04 12:38:22.319209] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
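The target launch itself, per the nvmfpid line above, is the command sketched here. The relative build path and the polling loop are stand-ins: the harness uses an absolute workspace path and its own waitforlisten helper, but the flags are verbatim (-i 0 sets the shared-memory id, -e 0xFFFF the tracepoint group mask logged above, --interrupt-mode switches each reactor and poll-group thread to intr mode, and -m 0xF pins the four reactors seen starting on cores 0-3):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # crude stand-in for waitforlisten: poll until the app answers on its RPC socket
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done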
00:33:48.412 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:48.412 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:33:48.412 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:48.412 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:48.412 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:48.412 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:48.672 12:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:48.672 [2024-11-04 12:38:23.138184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:48.672 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:48.932 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:48.932 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:49.192 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:49.192 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:49.453 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:49.453 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:49.453 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:49.453 12:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:49.713 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:49.973 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:49.974 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:49.974 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:49.974 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:50.234 12:38:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:33:50.234 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:50.493 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:50.493 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:50.493 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:50.753 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:50.753 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:51.014 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:51.014 [2024-11-04 12:38:25.542286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:51.014 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:51.274 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:51.535 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:51.794 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:51.795 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:33:51.795 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:51.795 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:33:51.795 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:33:51.795 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:33:54.337 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:54.337 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
00:33:54.337 12:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:33:54.337 [global]
00:33:54.337 thread=1
00:33:54.337 invalidate=1
00:33:54.337 rw=write
00:33:54.338 time_based=1
00:33:54.338 runtime=1
00:33:54.338 ioengine=libaio
00:33:54.338 direct=1
00:33:54.338 bs=4096
00:33:54.338 iodepth=1
00:33:54.338 norandommap=0
00:33:54.338 numjobs=1
00:33:54.338
00:33:54.338 verify_dump=1
00:33:54.338 verify_backlog=512
00:33:54.338 verify_state_save=0
00:33:54.338 do_verify=1
00:33:54.338 verify=crc32c-intel
00:33:54.338 [job0]
00:33:54.338 filename=/dev/nvme0n1
00:33:54.338 [job1]
00:33:54.338 filename=/dev/nvme0n2
00:33:54.338 [job2]
00:33:54.338 filename=/dev/nvme0n3
00:33:54.338 [job3]
00:33:54.338 filename=/dev/nvme0n4
00:33:54.338 Could not set queue depth (nvme0n1)
00:33:54.338 Could not set queue depth (nvme0n2)
00:33:54.338 Could not set queue depth (nvme0n3)
00:33:54.338 Could not set queue depth (nvme0n4)
00:33:54.338 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:33:54.338 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:33:54.338 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:33:54.338 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:33:54.338 fio-3.35
00:33:54.338 Starting 4 threads
00:33:55.725
00:33:55.725 job0: (groupid=0, jobs=1): err= 0: pid=1911262: Mon Nov 4 12:38:30 2024
00:33:55.725 read: IOPS=363, BW=1455KiB/s (1489kB/s)(1456KiB/1001msec)
00:33:55.725 slat (nsec): min=25762, max=50869, avg=27283.11, stdev=2899.40
00:33:55.725 clat (usec): min=765, max=41994, avg=1789.83, stdev=5618.23
00:33:55.725 lat (usec): min=792, max=42021, avg=1817.11, stdev=5618.13
00:33:55.725 clat percentiles (usec):
00:33:55.725 | 1.00th=[ 824], 5.00th=[ 898], 10.00th=[ 922], 20.00th=[ 955],
00:33:55.725 | 30.00th=[ 979], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020],
00:33:55.725 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1123],
00:33:55.725 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:33:55.725 | 99.99th=[42206]
00:33:55.725 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets
00:33:55.725 slat (nsec): min=9198, max=75398, avg=30317.71, stdev=10287.60
00:33:55.725 clat (usec): min=178, max=1087, avg=617.96, stdev=142.78
00:33:55.725 lat (usec): min=192, max=1121, avg=648.28, stdev=146.70
00:33:55.725 clat percentiles (usec):
00:33:55.725 | 1.00th=[ 351], 5.00th=[ 400], 10.00th=[ 445], 20.00th=[ 490],
00:33:55.725 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 611], 60.00th=[ 652],
00:33:55.725 | 70.00th=[ 693], 80.00th=[ 742], 90.00th=[ 807], 95.00th=[ 865],
00:33:55.725 |
99.00th=[ 955], 99.50th=[ 1037], 99.90th=[ 1090], 99.95th=[ 1090], 00:33:55.725 | 99.99th=[ 1090] 00:33:55.725 bw ( KiB/s): min= 4087, max= 4087, per=34.03%, avg=4087.00, stdev= 0.00, samples=1 00:33:55.725 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:33:55.725 lat (usec) : 250=0.11%, 500=14.16%, 750=33.79%, 1000=30.59% 00:33:55.725 lat (msec) : 2=20.55%, 50=0.80% 00:33:55.725 cpu : usr=1.80%, sys=3.40%, ctx=876, majf=0, minf=1 00:33:55.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:55.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.725 issued rwts: total=364,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:55.725 job1: (groupid=0, jobs=1): err= 0: pid=1911266: Mon Nov 4 12:38:30 2024 00:33:55.725 read: IOPS=1045, BW=4184KiB/s (4284kB/s)(4188KiB/1001msec) 00:33:55.725 slat (nsec): min=6746, max=62586, avg=23416.40, stdev=8470.27 00:33:55.725 clat (usec): min=178, max=42011, avg=595.76, stdev=1364.59 00:33:55.725 lat (usec): min=185, max=42041, avg=619.18, stdev=1365.12 00:33:55.725 clat percentiles (usec): 00:33:55.725 | 1.00th=[ 215], 5.00th=[ 249], 10.00th=[ 326], 20.00th=[ 375], 00:33:55.725 | 30.00th=[ 420], 40.00th=[ 498], 50.00th=[ 553], 60.00th=[ 603], 00:33:55.725 | 70.00th=[ 627], 80.00th=[ 652], 90.00th=[ 807], 95.00th=[ 930], 00:33:55.725 | 99.00th=[ 1045], 99.50th=[ 1074], 99.90th=[14484], 99.95th=[42206], 00:33:55.725 | 99.99th=[42206] 00:33:55.725 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:33:55.725 slat (nsec): min=9676, max=54674, avg=17570.77, stdev=11049.48 00:33:55.725 clat (usec): min=98, max=505, avg=202.58, stdev=105.77 00:33:55.725 lat (usec): min=108, max=540, avg=220.15, stdev=112.80 00:33:55.726 clat percentiles (usec): 00:33:55.726 | 1.00th=[ 102], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 111], 00:33:55.726 | 30.00th=[ 114], 40.00th=[ 119], 50.00th=[ 127], 60.00th=[ 239], 00:33:55.726 | 70.00th=[ 277], 80.00th=[ 310], 90.00th=[ 347], 95.00th=[ 392], 00:33:55.726 | 99.00th=[ 465], 99.50th=[ 474], 99.90th=[ 502], 99.95th=[ 506], 00:33:55.726 | 99.99th=[ 506] 00:33:55.726 bw ( KiB/s): min= 6736, max= 6736, per=56.08%, avg=6736.00, stdev= 0.00, samples=1 00:33:55.726 iops : min= 1684, max= 1684, avg=1684.00, stdev= 0.00, samples=1 00:33:55.726 lat (usec) : 100=0.23%, 250=38.68%, 500=36.93%, 750=19.71%, 1000=3.68% 00:33:55.726 lat (msec) : 2=0.70%, 20=0.04%, 50=0.04% 00:33:55.726 cpu : usr=2.50%, sys=5.80%, ctx=2586, majf=0, minf=1 00:33:55.726 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:55.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.726 issued rwts: total=1047,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.726 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:55.726 job2: (groupid=0, jobs=1): err= 0: pid=1911272: Mon Nov 4 12:38:30 2024 00:33:55.726 read: IOPS=237, BW=950KiB/s (973kB/s)(960KiB/1010msec) 00:33:55.726 slat (nsec): min=7169, max=53568, avg=27242.83, stdev=4776.40 00:33:55.726 clat (usec): min=659, max=41929, avg=3176.94, stdev=9136.22 00:33:55.726 lat (usec): min=687, max=41955, avg=3204.19, stdev=9135.99 00:33:55.726 clat percentiles (usec): 00:33:55.726 | 1.00th=[ 717], 5.00th=[ 799], 10.00th=[ 
840], 20.00th=[ 914], 00:33:55.726 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020], 00:33:55.726 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1139], 95.00th=[41157], 00:33:55.726 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:33:55.726 | 99.99th=[41681] 00:33:55.726 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:33:55.726 slat (nsec): min=9427, max=51687, avg=27599.62, stdev=10368.87 00:33:55.726 clat (usec): min=233, max=760, avg=430.98, stdev=79.78 00:33:55.726 lat (usec): min=255, max=793, avg=458.58, stdev=85.12 00:33:55.726 clat percentiles (usec): 00:33:55.726 | 1.00th=[ 260], 5.00th=[ 289], 10.00th=[ 318], 20.00th=[ 355], 00:33:55.726 | 30.00th=[ 383], 40.00th=[ 424], 50.00th=[ 445], 60.00th=[ 461], 00:33:55.726 | 70.00th=[ 478], 80.00th=[ 494], 90.00th=[ 529], 95.00th=[ 545], 00:33:55.726 | 99.00th=[ 594], 99.50th=[ 701], 99.90th=[ 758], 99.95th=[ 758], 00:33:55.726 | 99.99th=[ 758] 00:33:55.726 bw ( KiB/s): min= 4096, max= 4096, per=34.10%, avg=4096.00, stdev= 0.00, samples=1 00:33:55.726 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:55.726 lat (usec) : 250=0.40%, 500=55.59%, 750=12.37%, 1000=15.03% 00:33:55.726 lat (msec) : 2=14.76%, 4=0.13%, 50=1.73% 00:33:55.726 cpu : usr=1.19%, sys=2.28%, ctx=752, majf=0, minf=1 00:33:55.726 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:55.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.726 issued rwts: total=240,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.726 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:55.726 job3: (groupid=0, jobs=1): err= 0: pid=1911276: Mon Nov 4 12:38:30 2024 00:33:55.726 read: IOPS=16, BW=66.5KiB/s (68.1kB/s)(68.0KiB/1023msec) 00:33:55.726 slat (nsec): min=26584, max=27763, avg=26928.35, stdev=323.91 00:33:55.726 clat (usec): min=40981, max=42112, avg=41783.56, stdev=360.42 00:33:55.726 lat (usec): min=41008, max=42139, avg=41810.48, stdev=360.40 00:33:55.726 clat percentiles (usec): 00:33:55.726 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:33:55.726 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:33:55.726 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:55.726 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:55.726 | 99.99th=[42206] 00:33:55.726 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:33:55.726 slat (nsec): min=10089, max=57570, avg=31583.06, stdev=9809.42 00:33:55.726 clat (usec): min=238, max=1037, avg=567.83, stdev=141.40 00:33:55.726 lat (usec): min=273, max=1074, avg=599.41, stdev=145.14 00:33:55.726 clat percentiles (usec): 00:33:55.726 | 1.00th=[ 318], 5.00th=[ 359], 10.00th=[ 375], 20.00th=[ 457], 00:33:55.726 | 30.00th=[ 486], 40.00th=[ 510], 50.00th=[ 545], 60.00th=[ 603], 00:33:55.726 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 816], 00:33:55.726 | 99.00th=[ 930], 99.50th=[ 971], 99.90th=[ 1037], 99.95th=[ 1037], 00:33:55.726 | 99.99th=[ 1037] 00:33:55.726 bw ( KiB/s): min= 4096, max= 4096, per=34.10%, avg=4096.00, stdev= 0.00, samples=1 00:33:55.726 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:55.726 lat (usec) : 250=0.38%, 500=34.59%, 750=52.74%, 1000=8.88% 00:33:55.726 lat (msec) : 2=0.19%, 50=3.21% 00:33:55.726 cpu : usr=0.68%, sys=1.66%, ctx=531, 
majf=0, minf=1 00:33:55.726 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:55.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:55.726 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:55.726 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:55.726 00:33:55.726 Run status group 0 (all jobs): 00:33:55.726 READ: bw=6522KiB/s (6679kB/s), 66.5KiB/s-4184KiB/s (68.1kB/s-4284kB/s), io=6672KiB (6832kB), run=1001-1023msec 00:33:55.726 WRITE: bw=11.7MiB/s (12.3MB/s), 2002KiB/s-6138KiB/s (2050kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1023msec 00:33:55.726 00:33:55.726 Disk stats (read/write): 00:33:55.726 nvme0n1: ios=253/512, merge=0/0, ticks=527/244, in_queue=771, util=86.97% 00:33:55.726 nvme0n2: ios=1053/1024, merge=0/0, ticks=801/176, in_queue=977, util=96.83% 00:33:55.726 nvme0n3: ios=235/512, merge=0/0, ticks=546/214, in_queue=760, util=88.35% 00:33:55.726 nvme0n4: ios=69/512, merge=0/0, ticks=1202/281, in_queue=1483, util=96.46% 00:33:55.726 12:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:33:55.726 [global] 00:33:55.726 thread=1 00:33:55.726 invalidate=1 00:33:55.726 rw=randwrite 00:33:55.726 time_based=1 00:33:55.726 runtime=1 00:33:55.726 ioengine=libaio 00:33:55.726 direct=1 00:33:55.726 bs=4096 00:33:55.726 iodepth=1 00:33:55.726 norandommap=0 00:33:55.726 numjobs=1 00:33:55.726 00:33:55.726 verify_dump=1 00:33:55.726 verify_backlog=512 00:33:55.726 verify_state_save=0 00:33:55.726 do_verify=1 00:33:55.726 verify=crc32c-intel 00:33:55.726 [job0] 00:33:55.726 filename=/dev/nvme0n1 00:33:55.726 [job1] 00:33:55.726 filename=/dev/nvme0n2 00:33:55.726 [job2] 00:33:55.726 filename=/dev/nvme0n3 00:33:55.726 [job3] 00:33:55.726 filename=/dev/nvme0n4 00:33:55.726 Could not set queue depth (nvme0n1) 00:33:55.726 Could not set queue depth (nvme0n2) 00:33:55.726 Could not set queue depth (nvme0n3) 00:33:55.726 Could not set queue depth (nvme0n4) 00:33:55.987 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:55.987 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:55.987 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:55.987 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:55.987 fio-3.35 00:33:55.987 Starting 4 threads 00:33:57.375 00:33:57.375 job0: (groupid=0, jobs=1): err= 0: pid=1911780: Mon Nov 4 12:38:31 2024 00:33:57.375 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:57.375 slat (nsec): min=7608, max=46212, avg=25853.82, stdev=3301.28 00:33:57.375 clat (usec): min=747, max=1310, avg=1104.17, stdev=94.80 00:33:57.375 lat (usec): min=773, max=1336, avg=1130.02, stdev=94.80 00:33:57.375 clat percentiles (usec): 00:33:57.375 | 1.00th=[ 824], 5.00th=[ 898], 10.00th=[ 971], 20.00th=[ 1045], 00:33:57.375 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:33:57.375 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1237], 00:33:57.375 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1303], 99.95th=[ 1303], 00:33:57.375 | 99.99th=[ 1303] 00:33:57.375 write: 
IOPS=595, BW=2382KiB/s (2439kB/s)(2384KiB/1001msec); 0 zone resets 00:33:57.375 slat (nsec): min=9624, max=83329, avg=29171.78, stdev=9972.44 00:33:57.375 clat (usec): min=232, max=1303, avg=663.25, stdev=135.06 00:33:57.375 lat (usec): min=242, max=1337, avg=692.43, stdev=139.90 00:33:57.375 clat percentiles (usec): 00:33:57.375 | 1.00th=[ 355], 5.00th=[ 433], 10.00th=[ 486], 20.00th=[ 545], 00:33:57.375 | 30.00th=[ 611], 40.00th=[ 644], 50.00th=[ 668], 60.00th=[ 701], 00:33:57.375 | 70.00th=[ 734], 80.00th=[ 766], 90.00th=[ 807], 95.00th=[ 881], 00:33:57.375 | 99.00th=[ 971], 99.50th=[ 1020], 99.90th=[ 1303], 99.95th=[ 1303], 00:33:57.375 | 99.99th=[ 1303] 00:33:57.375 bw ( KiB/s): min= 4096, max= 4096, per=31.15%, avg=4096.00, stdev= 0.00, samples=1 00:33:57.375 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:57.375 lat (usec) : 250=0.18%, 500=6.50%, 750=34.03%, 1000=19.31% 00:33:57.375 lat (msec) : 2=39.98% 00:33:57.375 cpu : usr=1.40%, sys=3.40%, ctx=1110, majf=0, minf=1 00:33:57.375 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.375 issued rwts: total=512,596,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.375 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:57.375 job1: (groupid=0, jobs=1): err= 0: pid=1911783: Mon Nov 4 12:38:31 2024 00:33:57.375 read: IOPS=587, BW=2350KiB/s (2406kB/s)(2352KiB/1001msec) 00:33:57.375 slat (nsec): min=5997, max=63224, avg=26682.10, stdev=7214.21 00:33:57.375 clat (usec): min=244, max=1027, avg=729.27, stdev=140.55 00:33:57.375 lat (usec): min=256, max=1055, avg=755.95, stdev=140.57 00:33:57.375 clat percentiles (usec): 00:33:57.375 | 1.00th=[ 326], 5.00th=[ 453], 10.00th=[ 529], 20.00th=[ 619], 00:33:57.375 | 30.00th=[ 676], 40.00th=[ 725], 50.00th=[ 758], 60.00th=[ 791], 00:33:57.375 | 70.00th=[ 816], 80.00th=[ 840], 90.00th=[ 881], 95.00th=[ 914], 00:33:57.375 | 99.00th=[ 979], 99.50th=[ 1004], 99.90th=[ 1029], 99.95th=[ 1029], 00:33:57.375 | 99.99th=[ 1029] 00:33:57.375 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:33:57.375 slat (nsec): min=8621, max=70798, avg=33306.06, stdev=8494.02 00:33:57.375 clat (usec): min=140, max=911, avg=495.97, stdev=135.07 00:33:57.375 lat (usec): min=150, max=947, avg=529.27, stdev=136.71 00:33:57.375 clat percentiles (usec): 00:33:57.375 | 1.00th=[ 182], 5.00th=[ 273], 10.00th=[ 302], 20.00th=[ 379], 00:33:57.375 | 30.00th=[ 424], 40.00th=[ 465], 50.00th=[ 502], 60.00th=[ 537], 00:33:57.375 | 70.00th=[ 578], 80.00th=[ 619], 90.00th=[ 668], 95.00th=[ 701], 00:33:57.375 | 99.00th=[ 783], 99.50th=[ 807], 99.90th=[ 906], 99.95th=[ 914], 00:33:57.375 | 99.99th=[ 914] 00:33:57.375 bw ( KiB/s): min= 4096, max= 4096, per=31.15%, avg=4096.00, stdev= 0.00, samples=1 00:33:57.375 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:57.375 lat (usec) : 250=1.43%, 500=32.88%, 750=45.22%, 1000=20.29% 00:33:57.375 lat (msec) : 2=0.19% 00:33:57.375 cpu : usr=3.10%, sys=6.90%, ctx=1614, majf=0, minf=1 00:33:57.375 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.375 issued rwts: total=588,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.375 
latency : target=0, window=0, percentile=100.00%, depth=1 00:33:57.375 job2: (groupid=0, jobs=1): err= 0: pid=1911785: Mon Nov 4 12:38:31 2024 00:33:57.375 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:57.375 slat (nsec): min=8233, max=46592, avg=27310.57, stdev=3370.54 00:33:57.375 clat (usec): min=517, max=1312, avg=863.10, stdev=168.57 00:33:57.375 lat (usec): min=544, max=1338, avg=890.41, stdev=168.37 00:33:57.375 clat percentiles (usec): 00:33:57.375 | 1.00th=[ 562], 5.00th=[ 594], 10.00th=[ 619], 20.00th=[ 693], 00:33:57.375 | 30.00th=[ 750], 40.00th=[ 824], 50.00th=[ 881], 60.00th=[ 938], 00:33:57.375 | 70.00th=[ 988], 80.00th=[ 1029], 90.00th=[ 1074], 95.00th=[ 1106], 00:33:57.376 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 1319], 99.95th=[ 1319], 00:33:57.376 | 99.99th=[ 1319] 00:33:57.376 write: IOPS=928, BW=3712KiB/s (3801kB/s)(3716KiB/1001msec); 0 zone resets 00:33:57.376 slat (nsec): min=10234, max=55609, avg=32369.60, stdev=7279.25 00:33:57.376 clat (usec): min=142, max=1019, avg=539.80, stdev=135.65 00:33:57.376 lat (usec): min=154, max=1052, avg=572.17, stdev=137.37 00:33:57.376 clat percentiles (usec): 00:33:57.376 | 1.00th=[ 265], 5.00th=[ 314], 10.00th=[ 375], 20.00th=[ 420], 00:33:57.376 | 30.00th=[ 469], 40.00th=[ 502], 50.00th=[ 529], 60.00th=[ 562], 00:33:57.376 | 70.00th=[ 603], 80.00th=[ 652], 90.00th=[ 725], 95.00th=[ 775], 00:33:57.376 | 99.00th=[ 881], 99.50th=[ 914], 99.90th=[ 1020], 99.95th=[ 1020], 00:33:57.376 | 99.99th=[ 1020] 00:33:57.376 bw ( KiB/s): min= 4096, max= 4096, per=31.15%, avg=4096.00, stdev= 0.00, samples=1 00:33:57.376 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:57.376 lat (usec) : 250=0.42%, 500=24.98%, 750=44.69%, 1000=20.54% 00:33:57.376 lat (msec) : 2=9.37% 00:33:57.376 cpu : usr=2.60%, sys=4.20%, ctx=1442, majf=0, minf=1 00:33:57.376 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.376 issued rwts: total=512,929,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.376 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:57.376 job3: (groupid=0, jobs=1): err= 0: pid=1911790: Mon Nov 4 12:38:31 2024 00:33:57.376 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:57.376 slat (nsec): min=8625, max=45045, avg=26461.86, stdev=3334.14 00:33:57.376 clat (usec): min=588, max=1509, avg=1050.48, stdev=145.80 00:33:57.376 lat (usec): min=616, max=1536, avg=1076.94, stdev=146.11 00:33:57.376 clat percentiles (usec): 00:33:57.376 | 1.00th=[ 693], 5.00th=[ 799], 10.00th=[ 857], 20.00th=[ 930], 00:33:57.376 | 30.00th=[ 979], 40.00th=[ 1020], 50.00th=[ 1057], 60.00th=[ 1090], 00:33:57.376 | 70.00th=[ 1123], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1270], 00:33:57.376 | 99.00th=[ 1385], 99.50th=[ 1418], 99.90th=[ 1516], 99.95th=[ 1516], 00:33:57.376 | 99.99th=[ 1516] 00:33:57.376 write: IOPS=741, BW=2965KiB/s (3036kB/s)(2968KiB/1001msec); 0 zone resets 00:33:57.376 slat (nsec): min=9756, max=57716, avg=31220.87, stdev=8163.04 00:33:57.376 clat (usec): min=175, max=1018, avg=559.04, stdev=161.02 00:33:57.376 lat (usec): min=186, max=1052, avg=590.26, stdev=163.04 00:33:57.376 clat percentiles (usec): 00:33:57.376 | 1.00th=[ 255], 5.00th=[ 302], 10.00th=[ 351], 20.00th=[ 412], 00:33:57.376 | 30.00th=[ 465], 40.00th=[ 506], 50.00th=[ 553], 60.00th=[ 603], 00:33:57.376 | 70.00th=[ 644], 
80.00th=[ 701], 90.00th=[ 775], 95.00th=[ 824], 00:33:57.376 | 99.00th=[ 938], 99.50th=[ 1004], 99.90th=[ 1020], 99.95th=[ 1020], 00:33:57.376 | 99.99th=[ 1020] 00:33:57.376 bw ( KiB/s): min= 4096, max= 4096, per=31.15%, avg=4096.00, stdev= 0.00, samples=1 00:33:57.376 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:57.376 lat (usec) : 250=0.48%, 500=22.17%, 750=29.98%, 1000=20.26% 00:33:57.376 lat (msec) : 2=27.11% 00:33:57.376 cpu : usr=2.30%, sys=3.40%, ctx=1257, majf=0, minf=1 00:33:57.376 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.376 issued rwts: total=512,742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.376 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:57.376 00:33:57.376 Run status group 0 (all jobs): 00:33:57.376 READ: bw=8488KiB/s (8691kB/s), 2046KiB/s-2350KiB/s (2095kB/s-2406kB/s), io=8496KiB (8700kB), run=1001-1001msec 00:33:57.376 WRITE: bw=12.8MiB/s (13.5MB/s), 2382KiB/s-4092KiB/s (2439kB/s-4190kB/s), io=12.9MiB (13.5MB), run=1001-1001msec 00:33:57.376 00:33:57.376 Disk stats (read/write): 00:33:57.376 nvme0n1: ios=468/512, merge=0/0, ticks=595/331, in_queue=926, util=84.77% 00:33:57.376 nvme0n2: ios=554/824, merge=0/0, ticks=407/319, in_queue=726, util=91.23% 00:33:57.376 nvme0n3: ios=566/632, merge=0/0, ticks=548/330, in_queue=878, util=95.35% 00:33:57.376 nvme0n4: ios=551/512, merge=0/0, ticks=620/271, in_queue=891, util=97.33% 00:33:57.376 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:57.376 [global] 00:33:57.376 thread=1 00:33:57.376 invalidate=1 00:33:57.376 rw=write 00:33:57.376 time_based=1 00:33:57.376 runtime=1 00:33:57.376 ioengine=libaio 00:33:57.376 direct=1 00:33:57.376 bs=4096 00:33:57.376 iodepth=128 00:33:57.376 norandommap=0 00:33:57.376 numjobs=1 00:33:57.376 00:33:57.376 verify_dump=1 00:33:57.376 verify_backlog=512 00:33:57.376 verify_state_save=0 00:33:57.376 do_verify=1 00:33:57.376 verify=crc32c-intel 00:33:57.376 [job0] 00:33:57.376 filename=/dev/nvme0n1 00:33:57.376 [job1] 00:33:57.376 filename=/dev/nvme0n2 00:33:57.376 [job2] 00:33:57.376 filename=/dev/nvme0n3 00:33:57.376 [job3] 00:33:57.376 filename=/dev/nvme0n4 00:33:57.376 Could not set queue depth (nvme0n1) 00:33:57.376 Could not set queue depth (nvme0n2) 00:33:57.376 Could not set queue depth (nvme0n3) 00:33:57.376 Could not set queue depth (nvme0n4) 00:33:57.637 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:57.637 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:57.637 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:57.637 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:57.637 fio-3.35 00:33:57.637 Starting 4 threads 00:33:59.022 00:33:59.022 job0: (groupid=0, jobs=1): err= 0: pid=1912310: Mon Nov 4 12:38:33 2024 00:33:59.022 read: IOPS=6540, BW=25.5MiB/s (26.8MB/s)(25.7MiB/1007msec) 00:33:59.022 slat (nsec): min=946, max=12169k, avg=71928.44, stdev=512254.64 00:33:59.022 clat (usec): min=2696, max=27727, avg=9852.75, stdev=3965.56 
00:33:59.022 lat (usec): min=3211, max=27733, avg=9924.68, stdev=3989.21 00:33:59.022 clat percentiles (usec): 00:33:59.022 | 1.00th=[ 5014], 5.00th=[ 6325], 10.00th=[ 6849], 20.00th=[ 7373], 00:33:59.022 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8455], 00:33:59.022 | 70.00th=[ 9634], 80.00th=[13304], 90.00th=[16057], 95.00th=[18482], 00:33:59.022 | 99.00th=[22676], 99.50th=[23725], 99.90th=[27132], 99.95th=[27132], 00:33:59.022 | 99.99th=[27657] 00:33:59.022 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:33:59.022 slat (nsec): min=1611, max=11170k, avg=74806.85, stdev=480863.16 00:33:59.022 clat (usec): min=1029, max=31561, avg=9343.05, stdev=4333.57 00:33:59.022 lat (usec): min=1037, max=31570, avg=9417.86, stdev=4375.87 00:33:59.022 clat percentiles (usec): 00:33:59.022 | 1.00th=[ 3720], 5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[ 6980], 00:33:59.022 | 30.00th=[ 7242], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7963], 00:33:59.022 | 70.00th=[ 9110], 80.00th=[10814], 90.00th=[15008], 95.00th=[19006], 00:33:59.022 | 99.00th=[27919], 99.50th=[30016], 99.90th=[31065], 99.95th=[31589], 00:33:59.022 | 99.99th=[31589] 00:33:59.022 bw ( KiB/s): min=24576, max=28672, per=29.49%, avg=26624.00, stdev=2896.31, samples=2 00:33:59.022 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:33:59.022 lat (msec) : 2=0.11%, 4=0.88%, 10=72.71%, 20=22.83%, 50=3.47% 00:33:59.022 cpu : usr=4.47%, sys=3.98%, ctx=677, majf=0, minf=1 00:33:59.022 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:33:59.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.022 issued rwts: total=6586,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.022 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.022 job1: (groupid=0, jobs=1): err= 0: pid=1912311: Mon Nov 4 12:38:33 2024 00:33:59.022 read: IOPS=5085, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:33:59.022 slat (nsec): min=888, max=10277k, avg=86144.00, stdev=498713.40 00:33:59.022 clat (usec): min=807, max=40969, avg=10681.75, stdev=4060.89 00:33:59.022 lat (usec): min=5006, max=40995, avg=10767.90, stdev=4099.50 00:33:59.022 clat percentiles (usec): 00:33:59.022 | 1.00th=[ 6390], 5.00th=[ 7439], 10.00th=[ 7963], 20.00th=[ 8848], 00:33:59.022 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:33:59.022 | 70.00th=[10814], 80.00th=[11600], 90.00th=[12518], 95.00th=[16581], 00:33:59.022 | 99.00th=[32375], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:33:59.022 | 99.99th=[41157] 00:33:59.022 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:33:59.022 slat (nsec): min=1525, max=20663k, avg=96763.14, stdev=669558.75 00:33:59.022 clat (usec): min=4557, max=53181, avg=12828.57, stdev=6856.67 00:33:59.022 lat (usec): min=4566, max=53212, avg=12925.34, stdev=6917.47 00:33:59.022 clat percentiles (usec): 00:33:59.022 | 1.00th=[ 5604], 5.00th=[ 7439], 10.00th=[ 7963], 20.00th=[ 9110], 00:33:59.022 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10290], 00:33:59.022 | 70.00th=[11994], 80.00th=[16712], 90.00th=[22152], 95.00th=[30278], 00:33:59.022 | 99.00th=[39584], 99.50th=[39584], 99.90th=[39584], 99.95th=[44827], 00:33:59.022 | 99.99th=[53216] 00:33:59.022 bw ( KiB/s): min=20480, max=23640, per=24.43%, avg=22060.00, stdev=2234.46, samples=2 00:33:59.022 iops : min= 5120, max= 5910, 
avg=5515.00, stdev=558.61, samples=2 00:33:59.022 lat (usec) : 1000=0.01% 00:33:59.022 lat (msec) : 10=58.24%, 20=33.67%, 50=8.07%, 100=0.01% 00:33:59.022 cpu : usr=1.69%, sys=4.07%, ctx=727, majf=0, minf=3 00:33:59.022 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:33:59.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.022 issued rwts: total=5131,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.022 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.022 job2: (groupid=0, jobs=1): err= 0: pid=1912312: Mon Nov 4 12:38:33 2024 00:33:59.022 read: IOPS=5973, BW=23.3MiB/s (24.5MB/s)(23.5MiB/1009msec) 00:33:59.022 slat (nsec): min=927, max=14401k, avg=83580.28, stdev=667443.91 00:33:59.022 clat (usec): min=3059, max=33459, avg=11476.68, stdev=4533.29 00:33:59.022 lat (usec): min=3067, max=33463, avg=11560.26, stdev=4572.98 00:33:59.022 clat percentiles (usec): 00:33:59.022 | 1.00th=[ 5342], 5.00th=[ 6718], 10.00th=[ 7439], 20.00th=[ 8094], 00:33:59.022 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[10159], 60.00th=[11076], 00:33:59.022 | 70.00th=[12387], 80.00th=[15270], 90.00th=[18220], 95.00th=[20055], 00:33:59.022 | 99.00th=[27657], 99.50th=[27657], 99.90th=[27919], 99.95th=[31327], 00:33:59.022 | 99.99th=[33424] 00:33:59.022 write: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1009msec); 0 zone resets 00:33:59.022 slat (nsec): min=1589, max=11917k, avg=71723.53, stdev=573442.10 00:33:59.022 clat (usec): min=1206, max=25918, avg=9573.75, stdev=3506.26 00:33:59.022 lat (usec): min=1217, max=25949, avg=9645.47, stdev=3532.37 00:33:59.022 clat percentiles (usec): 00:33:59.022 | 1.00th=[ 3163], 5.00th=[ 5669], 10.00th=[ 6128], 20.00th=[ 6783], 00:33:59.022 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9634], 00:33:59.022 | 70.00th=[10814], 80.00th=[11731], 90.00th=[14091], 95.00th=[16450], 00:33:59.022 | 99.00th=[20317], 99.50th=[25297], 99.90th=[25297], 99.95th=[25297], 00:33:59.022 | 99.99th=[25822] 00:33:59.022 bw ( KiB/s): min=20480, max=28672, per=27.22%, avg=24576.00, stdev=5792.62, samples=2 00:33:59.022 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:33:59.022 lat (msec) : 2=0.06%, 4=0.81%, 10=55.54%, 20=39.73%, 50=3.85% 00:33:59.022 cpu : usr=4.46%, sys=5.56%, ctx=357, majf=0, minf=1 00:33:59.022 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:59.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.022 issued rwts: total=6027,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.022 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.022 job3: (groupid=0, jobs=1): err= 0: pid=1912313: Mon Nov 4 12:38:33 2024 00:33:59.022 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:33:59.022 slat (nsec): min=969, max=15947k, avg=107166.72, stdev=804813.35 00:33:59.022 clat (usec): min=1284, max=88141, avg=12736.19, stdev=9333.43 00:33:59.022 lat (usec): min=1302, max=88148, avg=12843.36, stdev=9429.93 00:33:59.022 clat percentiles (usec): 00:33:59.022 | 1.00th=[ 2212], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 6915], 00:33:59.022 | 30.00th=[ 7635], 40.00th=[ 9110], 50.00th=[10290], 60.00th=[12125], 00:33:59.022 | 70.00th=[14091], 80.00th=[15664], 90.00th=[21103], 95.00th=[26084], 00:33:59.022 | 99.00th=[55313], 99.50th=[76022], 
99.90th=[88605], 99.95th=[88605], 00:33:59.022 | 99.99th=[88605] 00:33:59.022 write: IOPS=4332, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1003msec); 0 zone resets 00:33:59.022 slat (nsec): min=1666, max=10706k, avg=122141.68, stdev=684113.17 00:33:59.022 clat (usec): min=1187, max=88136, avg=17276.64, stdev=19766.41 00:33:59.022 lat (usec): min=1198, max=88147, avg=17398.78, stdev=19898.88 00:33:59.022 clat percentiles (usec): 00:33:59.022 | 1.00th=[ 4359], 5.00th=[ 5604], 10.00th=[ 6259], 20.00th=[ 7570], 00:33:59.022 | 30.00th=[ 7898], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[10290], 00:33:59.023 | 70.00th=[13566], 80.00th=[17695], 90.00th=[49546], 95.00th=[73925], 00:33:59.023 | 99.00th=[81265], 99.50th=[82314], 99.90th=[82314], 99.95th=[82314], 00:33:59.023 | 99.99th=[88605] 00:33:59.023 bw ( KiB/s): min=13264, max=20480, per=18.69%, avg=16872.00, stdev=5102.48, samples=2 00:33:59.023 iops : min= 3316, max= 5120, avg=4218.00, stdev=1275.62, samples=2 00:33:59.023 lat (msec) : 2=0.36%, 4=0.83%, 10=52.46%, 20=30.86%, 50=9.60% 00:33:59.023 lat (msec) : 100=5.90% 00:33:59.023 cpu : usr=3.09%, sys=4.29%, ctx=488, majf=0, minf=2 00:33:59.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:33:59.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.023 issued rwts: total=4096,4345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.023 00:33:59.023 Run status group 0 (all jobs): 00:33:59.023 READ: bw=84.6MiB/s (88.7MB/s), 16.0MiB/s-25.5MiB/s (16.7MB/s-26.8MB/s), io=85.3MiB (89.5MB), run=1003-1009msec 00:33:59.023 WRITE: bw=88.2MiB/s (92.5MB/s), 16.9MiB/s-25.8MiB/s (17.7MB/s-27.1MB/s), io=89.0MiB (93.3MB), run=1003-1009msec 00:33:59.023 00:33:59.023 Disk stats (read/write): 00:33:59.023 nvme0n1: ios=5678/6110, merge=0/0, ticks=29356/28365, in_queue=57721, util=99.30% 00:33:59.023 nvme0n2: ios=4651/5057, merge=0/0, ticks=22454/28472, in_queue=50926, util=92.56% 00:33:59.023 nvme0n3: ios=4656/5093, merge=0/0, ticks=53113/48030, in_queue=101143, util=92.62% 00:33:59.023 nvme0n4: ios=2765/3072, merge=0/0, ticks=37889/64290, in_queue=102179, util=92.32% 00:33:59.023 12:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:33:59.023 [global] 00:33:59.023 thread=1 00:33:59.023 invalidate=1 00:33:59.023 rw=randwrite 00:33:59.023 time_based=1 00:33:59.023 runtime=1 00:33:59.023 ioengine=libaio 00:33:59.023 direct=1 00:33:59.023 bs=4096 00:33:59.023 iodepth=128 00:33:59.023 norandommap=0 00:33:59.023 numjobs=1 00:33:59.023 00:33:59.023 verify_dump=1 00:33:59.023 verify_backlog=512 00:33:59.023 verify_state_save=0 00:33:59.023 do_verify=1 00:33:59.023 verify=crc32c-intel 00:33:59.023 [job0] 00:33:59.023 filename=/dev/nvme0n1 00:33:59.023 [job1] 00:33:59.023 filename=/dev/nvme0n2 00:33:59.023 [job2] 00:33:59.023 filename=/dev/nvme0n3 00:33:59.023 [job3] 00:33:59.023 filename=/dev/nvme0n4 00:33:59.023 Could not set queue depth (nvme0n1) 00:33:59.023 Could not set queue depth (nvme0n2) 00:33:59.023 Could not set queue depth (nvme0n3) 00:33:59.023 Could not set queue depth (nvme0n4) 00:33:59.284 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:59.284 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:59.284 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:59.284 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:59.284 fio-3.35 00:33:59.284 Starting 4 threads 00:34:00.669 00:34:00.669 job0: (groupid=0, jobs=1): err= 0: pid=1912832: Mon Nov 4 12:38:35 2024 00:34:00.669 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:34:00.669 slat (usec): min=2, max=12800, avg=113.62, stdev=824.68 00:34:00.669 clat (usec): min=7199, max=61402, avg=14215.85, stdev=5880.17 00:34:00.669 lat (usec): min=7315, max=61410, avg=14329.47, stdev=5954.04 00:34:00.669 clat percentiles (usec): 00:34:00.669 | 1.00th=[ 7504], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10159], 00:34:00.669 | 30.00th=[10945], 40.00th=[12780], 50.00th=[14353], 60.00th=[15008], 00:34:00.669 | 70.00th=[15270], 80.00th=[15533], 90.00th=[17171], 95.00th=[21365], 00:34:00.669 | 99.00th=[47449], 99.50th=[55313], 99.90th=[61604], 99.95th=[61604], 00:34:00.669 | 99.99th=[61604] 00:34:00.669 write: IOPS=4349, BW=17.0MiB/s (17.8MB/s)(17.2MiB/1013msec); 0 zone resets 00:34:00.669 slat (nsec): min=1671, max=11591k, avg=115761.55, stdev=748759.10 00:34:00.669 clat (usec): min=1155, max=61388, avg=15945.90, stdev=12048.98 00:34:00.669 lat (usec): min=1166, max=61397, avg=16061.67, stdev=12129.95 00:34:00.669 clat percentiles (usec): 00:34:00.669 | 1.00th=[ 6587], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 8979], 00:34:00.669 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[11731], 60.00th=[13829], 00:34:00.669 | 70.00th=[14353], 80.00th=[17695], 90.00th=[30016], 95.00th=[50070], 00:34:00.669 | 99.00th=[56361], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:34:00.669 | 99.99th=[61604] 00:34:00.669 bw ( KiB/s): min=11568, max=22618, per=17.92%, avg=17093.00, stdev=7813.53, samples=2 00:34:00.669 iops : min= 2892, max= 5654, avg=4273.00, stdev=1953.03, samples=2 00:34:00.669 lat (msec) : 2=0.02%, 4=0.05%, 10=25.15%, 20=63.67%, 50=8.06% 00:34:00.669 lat (msec) : 100=3.06% 00:34:00.669 cpu : usr=3.75%, sys=4.35%, ctx=241, majf=0, minf=1 00:34:00.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:34:00.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:00.669 issued rwts: total=4096,4406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.669 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:00.669 job1: (groupid=0, jobs=1): err= 0: pid=1912833: Mon Nov 4 12:38:35 2024 00:34:00.669 read: IOPS=9151, BW=35.7MiB/s (37.5MB/s)(36.0MiB/1007msec) 00:34:00.669 slat (nsec): min=952, max=5929.9k, avg=51029.86, stdev=379875.98 00:34:00.669 clat (usec): min=3231, max=14254, avg=7135.33, stdev=1677.62 00:34:00.669 lat (usec): min=3235, max=14257, avg=7186.36, stdev=1695.48 00:34:00.669 clat percentiles (usec): 00:34:00.669 | 1.00th=[ 4178], 5.00th=[ 4948], 10.00th=[ 5211], 20.00th=[ 5735], 00:34:00.669 | 30.00th=[ 6128], 40.00th=[ 6456], 50.00th=[ 6915], 60.00th=[ 7242], 00:34:00.669 | 70.00th=[ 7701], 80.00th=[ 8717], 90.00th=[ 9634], 95.00th=[10159], 00:34:00.669 | 99.00th=[11207], 99.50th=[11469], 99.90th=[12649], 99.95th=[13173], 00:34:00.669 | 99.99th=[14222] 00:34:00.669 write: IOPS=9258, BW=36.2MiB/s (37.9MB/s)(36.4MiB/1007msec); 0 zone resets 00:34:00.669 slat (nsec): min=1564, max=5596.6k, 
avg=51908.02, stdev=377093.21 00:34:00.669 clat (usec): min=1127, max=12589, avg=6644.58, stdev=1594.56 00:34:00.669 lat (usec): min=1137, max=12592, avg=6696.49, stdev=1600.65 00:34:00.669 clat percentiles (usec): 00:34:00.669 | 1.00th=[ 3949], 5.00th=[ 4424], 10.00th=[ 4621], 20.00th=[ 5145], 00:34:00.669 | 30.00th=[ 5669], 40.00th=[ 6128], 50.00th=[ 6456], 60.00th=[ 6783], 00:34:00.669 | 70.00th=[ 7177], 80.00th=[ 8717], 90.00th=[ 9110], 95.00th=[ 9241], 00:34:00.669 | 99.00th=[ 9503], 99.50th=[ 9896], 99.90th=[11731], 99.95th=[11994], 00:34:00.669 | 99.99th=[12649] 00:34:00.669 bw ( KiB/s): min=36864, max=36864, per=38.65%, avg=36864.00, stdev= 0.00, samples=2 00:34:00.669 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=2 00:34:00.669 lat (msec) : 2=0.05%, 4=0.84%, 10=95.66%, 20=3.45% 00:34:00.669 cpu : usr=5.67%, sys=9.44%, ctx=466, majf=0, minf=2 00:34:00.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:34:00.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:00.669 issued rwts: total=9216,9323,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.669 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:00.669 job2: (groupid=0, jobs=1): err= 0: pid=1912834: Mon Nov 4 12:38:35 2024 00:34:00.669 read: IOPS=7753, BW=30.3MiB/s (31.8MB/s)(30.6MiB/1011msec) 00:34:00.669 slat (nsec): min=1018, max=6882.6k, avg=60474.52, stdev=442010.57 00:34:00.669 clat (usec): min=3474, max=17845, avg=8251.54, stdev=2104.18 00:34:00.669 lat (usec): min=3480, max=17847, avg=8312.02, stdev=2120.12 00:34:00.669 clat percentiles (usec): 00:34:00.669 | 1.00th=[ 4555], 5.00th=[ 5276], 10.00th=[ 5866], 20.00th=[ 6521], 00:34:00.669 | 30.00th=[ 7177], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 8094], 00:34:00.669 | 70.00th=[ 9241], 80.00th=[10028], 90.00th=[11207], 95.00th=[11994], 00:34:00.669 | 99.00th=[13829], 99.50th=[16909], 99.90th=[17695], 99.95th=[17957], 00:34:00.669 | 99.99th=[17957] 00:34:00.669 write: IOPS=8102, BW=31.7MiB/s (33.2MB/s)(32.0MiB/1011msec); 0 zone resets 00:34:00.669 slat (nsec): min=1626, max=6269.2k, avg=58991.44, stdev=425876.73 00:34:00.669 clat (usec): min=1143, max=14420, avg=7753.87, stdev=1908.95 00:34:00.669 lat (usec): min=1152, max=14423, avg=7812.86, stdev=1912.89 00:34:00.669 clat percentiles (usec): 00:34:00.669 | 1.00th=[ 4359], 5.00th=[ 5014], 10.00th=[ 5276], 20.00th=[ 5604], 00:34:00.669 | 30.00th=[ 6718], 40.00th=[ 7242], 50.00th=[ 7701], 60.00th=[ 8029], 00:34:00.669 | 70.00th=[ 8356], 80.00th=[10159], 90.00th=[10552], 95.00th=[10683], 00:34:00.669 | 99.00th=[11207], 99.50th=[11207], 99.90th=[11994], 99.95th=[14222], 00:34:00.669 | 99.99th=[14484] 00:34:00.669 bw ( KiB/s): min=32768, max=32768, per=34.36%, avg=32768.00, stdev= 0.00, samples=2 00:34:00.669 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:34:00.669 lat (msec) : 2=0.07%, 4=0.39%, 10=78.72%, 20=20.82% 00:34:00.669 cpu : usr=5.45%, sys=8.32%, ctx=471, majf=0, minf=2 00:34:00.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:00.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:00.669 issued rwts: total=7839,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.669 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:00.669 job3: (groupid=0, jobs=1): err= 0: pid=1912836: 
Mon Nov 4 12:38:35 2024 00:34:00.669 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:34:00.669 slat (nsec): min=1434, max=22405k, avg=192231.75, stdev=1227017.71 00:34:00.669 clat (usec): min=10000, max=66954, avg=22441.27, stdev=9788.76 00:34:00.669 lat (usec): min=10009, max=66958, avg=22633.50, stdev=9925.81 00:34:00.669 clat percentiles (usec): 00:34:00.669 | 1.00th=[10945], 5.00th=[13304], 10.00th=[13698], 20.00th=[13960], 00:34:00.669 | 30.00th=[14484], 40.00th=[15795], 50.00th=[22152], 60.00th=[24511], 00:34:00.669 | 70.00th=[29754], 80.00th=[30540], 90.00th=[32900], 95.00th=[37487], 00:34:00.669 | 99.00th=[59507], 99.50th=[62653], 99.90th=[66847], 99.95th=[66847], 00:34:00.669 | 99.99th=[66847] 00:34:00.669 write: IOPS=2211, BW=8844KiB/s (9057kB/s)(8924KiB/1009msec); 0 zone resets 00:34:00.669 slat (nsec): min=1628, max=22348k, avg=265896.69, stdev=1421531.91 00:34:00.669 clat (msec): min=7, max=101, avg=35.93, stdev=31.26 00:34:00.669 lat (msec): min=7, max=101, avg=36.20, stdev=31.50 00:34:00.669 clat percentiles (msec): 00:34:00.669 | 1.00th=[ 10], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:34:00.669 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 19], 60.00th=[ 30], 00:34:00.669 | 70.00th=[ 39], 80.00th=[ 79], 90.00th=[ 94], 95.00th=[ 96], 00:34:00.669 | 99.00th=[ 101], 99.50th=[ 102], 99.90th=[ 102], 99.95th=[ 102], 00:34:00.669 | 99.99th=[ 102] 00:34:00.669 bw ( KiB/s): min= 4096, max=12736, per=8.82%, avg=8416.00, stdev=6109.40, samples=2 00:34:00.669 iops : min= 1024, max= 3184, avg=2104.00, stdev=1527.35, samples=2 00:34:00.669 lat (msec) : 10=0.63%, 20=49.99%, 50=36.27%, 100=12.29%, 250=0.82% 00:34:00.669 cpu : usr=1.98%, sys=2.68%, ctx=175, majf=0, minf=1 00:34:00.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:34:00.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:00.669 issued rwts: total=2048,2231,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.669 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:00.669 00:34:00.669 Run status group 0 (all jobs): 00:34:00.669 READ: bw=89.5MiB/s (93.8MB/s), 8119KiB/s-35.7MiB/s (8314kB/s-37.5MB/s), io=90.6MiB (95.0MB), run=1007-1013msec 00:34:00.669 WRITE: bw=93.1MiB/s (97.7MB/s), 8844KiB/s-36.2MiB/s (9057kB/s-37.9MB/s), io=94.3MiB (98.9MB), run=1007-1013msec 00:34:00.669 00:34:00.669 Disk stats (read/write): 00:34:00.670 nvme0n1: ios=3634/3959, merge=0/0, ticks=45787/54193, in_queue=99980, util=88.28% 00:34:00.670 nvme0n2: ios=7717/7740, merge=0/0, ticks=51918/49216, in_queue=101134, util=87.87% 00:34:00.670 nvme0n3: ios=6681/6658, merge=0/0, ticks=51577/49564, in_queue=101141, util=93.99% 00:34:00.670 nvme0n4: ios=1520/1543, merge=0/0, ticks=18126/33544, in_queue=51670, util=89.54% 00:34:00.670 12:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:00.670 12:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1913159 00:34:00.670 12:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:00.670 12:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:00.670 [global] 00:34:00.670 thread=1 00:34:00.670 invalidate=1 00:34:00.670 rw=read 00:34:00.670 time_based=1 00:34:00.670 runtime=10 
00:34:00.670 ioengine=libaio 00:34:00.670 direct=1 00:34:00.670 bs=4096 00:34:00.670 iodepth=1 00:34:00.670 norandommap=1 00:34:00.670 numjobs=1 00:34:00.670 00:34:00.670 [job0] 00:34:00.670 filename=/dev/nvme0n1 00:34:00.670 [job1] 00:34:00.670 filename=/dev/nvme0n2 00:34:00.670 [job2] 00:34:00.670 filename=/dev/nvme0n3 00:34:00.670 [job3] 00:34:00.670 filename=/dev/nvme0n4 00:34:00.670 Could not set queue depth (nvme0n1) 00:34:00.670 Could not set queue depth (nvme0n2) 00:34:00.670 Could not set queue depth (nvme0n3) 00:34:00.670 Could not set queue depth (nvme0n4) 00:34:00.930 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:00.930 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:00.930 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:00.930 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:00.930 fio-3.35 00:34:00.930 Starting 4 threads 00:34:04.234 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:04.234 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:04.234 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=1253376, buflen=4096 00:34:04.234 fio: pid=1913357, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:04.234 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9281536, buflen=4096 00:34:04.234 fio: pid=1913356, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:04.234 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:04.234 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:04.234 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11354112, buflen=4096 00:34:04.234 fio: pid=1913354, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:04.234 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:04.234 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:04.234 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=10768384, buflen=4096 00:34:04.234 fio: pid=1913355, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:04.234 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:04.234 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:04.495 00:34:04.495 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, 
error=Operation not supported): pid=1913354: Mon Nov 4 12:38:38 2024 00:34:04.495 read: IOPS=935, BW=3742KiB/s (3832kB/s)(10.8MiB/2963msec) 00:34:04.495 slat (usec): min=6, max=22713, avg=48.32, stdev=635.28 00:34:04.495 clat (usec): min=534, max=1291, avg=1006.25, stdev=92.24 00:34:04.495 lat (usec): min=559, max=23793, avg=1054.57, stdev=643.72 00:34:04.495 clat percentiles (usec): 00:34:04.495 | 1.00th=[ 750], 5.00th=[ 807], 10.00th=[ 873], 20.00th=[ 947], 00:34:04.495 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1037], 00:34:04.495 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:34:04.495 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1270], 00:34:04.495 | 99.99th=[ 1287] 00:34:04.495 bw ( KiB/s): min= 3816, max= 3856, per=38.01%, avg=3840.00, stdev=14.97, samples=5 00:34:04.495 iops : min= 954, max= 964, avg=960.00, stdev= 3.74, samples=5 00:34:04.495 lat (usec) : 750=1.05%, 1000=37.00% 00:34:04.495 lat (msec) : 2=61.92% 00:34:04.495 cpu : usr=0.95%, sys=2.77%, ctx=2777, majf=0, minf=1 00:34:04.495 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.496 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.496 issued rwts: total=2773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.496 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:04.496 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1913355: Mon Nov 4 12:38:38 2024 00:34:04.496 read: IOPS=833, BW=3331KiB/s (3411kB/s)(10.3MiB/3157msec) 00:34:04.496 slat (usec): min=6, max=20514, avg=50.42, stdev=629.30 00:34:04.496 clat (usec): min=234, max=41969, avg=1134.02, stdev=1770.35 00:34:04.496 lat (usec): min=260, max=41995, avg=1184.45, stdev=1883.62 00:34:04.496 clat percentiles (usec): 00:34:04.496 | 1.00th=[ 545], 5.00th=[ 701], 10.00th=[ 816], 20.00th=[ 930], 00:34:04.496 | 30.00th=[ 996], 40.00th=[ 1037], 50.00th=[ 1074], 60.00th=[ 1106], 00:34:04.496 | 70.00th=[ 1156], 80.00th=[ 1205], 90.00th=[ 1287], 95.00th=[ 1336], 00:34:04.496 | 99.00th=[ 1418], 99.50th=[ 1450], 99.90th=[41157], 99.95th=[41681], 00:34:04.496 | 99.99th=[42206] 00:34:04.496 bw ( KiB/s): min= 2276, max= 3968, per=34.04%, avg=3439.33, stdev=589.63, samples=6 00:34:04.496 iops : min= 569, max= 992, avg=859.83, stdev=147.41, samples=6 00:34:04.496 lat (usec) : 250=0.04%, 500=0.76%, 750=6.08%, 1000=24.14% 00:34:04.496 lat (msec) : 2=68.71%, 4=0.04%, 50=0.19% 00:34:04.496 cpu : usr=1.39%, sys=3.30%, ctx=2638, majf=0, minf=2 00:34:04.496 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.496 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.496 issued rwts: total=2630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.496 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:04.496 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1913356: Mon Nov 4 12:38:38 2024 00:34:04.496 read: IOPS=814, BW=3257KiB/s (3335kB/s)(9064KiB/2783msec) 00:34:04.496 slat (usec): min=7, max=20850, avg=40.37, stdev=502.53 00:34:04.496 clat (usec): min=761, max=1557, avg=1169.52, stdev=128.63 00:34:04.496 lat (usec): min=787, max=22020, avg=1209.89, stdev=516.62 00:34:04.496 clat percentiles (usec): 00:34:04.496 | 
1.00th=[ 865], 5.00th=[ 955], 10.00th=[ 1004], 20.00th=[ 1057], 00:34:04.496 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1205], 00:34:04.496 | 70.00th=[ 1237], 80.00th=[ 1287], 90.00th=[ 1336], 95.00th=[ 1385], 00:34:04.496 | 99.00th=[ 1434], 99.50th=[ 1467], 99.90th=[ 1483], 99.95th=[ 1532], 00:34:04.496 | 99.99th=[ 1565] 00:34:04.496 bw ( KiB/s): min= 3320, max= 3368, per=33.03%, avg=3337.60, stdev=19.92, samples=5 00:34:04.496 iops : min= 830, max= 842, avg=834.40, stdev= 4.98, samples=5 00:34:04.496 lat (usec) : 1000=8.87% 00:34:04.496 lat (msec) : 2=91.09% 00:34:04.496 cpu : usr=0.75%, sys=2.59%, ctx=2269, majf=0, minf=2 00:34:04.496 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.496 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.496 issued rwts: total=2267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.496 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:04.496 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1913357: Mon Nov 4 12:38:38 2024 00:34:04.496 read: IOPS=116, BW=465KiB/s (477kB/s)(1224KiB/2630msec) 00:34:04.496 slat (nsec): min=8005, max=62306, avg=27190.21, stdev=3328.03 00:34:04.496 clat (usec): min=662, max=42229, avg=8487.12, stdev=15629.36 00:34:04.496 lat (usec): min=697, max=42254, avg=8514.32, stdev=15628.72 00:34:04.496 clat percentiles (usec): 00:34:04.496 | 1.00th=[ 775], 5.00th=[ 971], 10.00th=[ 1020], 20.00th=[ 1090], 00:34:04.496 | 30.00th=[ 1172], 40.00th=[ 1221], 50.00th=[ 1254], 60.00th=[ 1270], 00:34:04.496 | 70.00th=[ 1303], 80.00th=[ 1369], 90.00th=[41681], 95.00th=[42206], 00:34:04.496 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:04.496 | 99.99th=[42206] 00:34:04.496 bw ( KiB/s): min= 96, max= 1352, per=4.79%, avg=484.80, stdev=527.17, samples=5 00:34:04.496 iops : min= 24, max= 338, avg=121.20, stdev=131.79, samples=5 00:34:04.496 lat (usec) : 750=0.98%, 1000=6.51% 00:34:04.496 lat (msec) : 2=74.27%, 50=17.92% 00:34:04.496 cpu : usr=0.23%, sys=0.42%, ctx=307, majf=0, minf=2 00:34:04.496 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.496 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.496 issued rwts: total=307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.496 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:04.496 00:34:04.496 Run status group 0 (all jobs): 00:34:04.496 READ: bw=9.86MiB/s (10.3MB/s), 465KiB/s-3742KiB/s (477kB/s-3832kB/s), io=31.1MiB (32.7MB), run=2630-3157msec 00:34:04.496 00:34:04.496 Disk stats (read/write): 00:34:04.496 nvme0n1: ios=2672/0, merge=0/0, ticks=2645/0, in_queue=2645, util=92.72% 00:34:04.496 nvme0n2: ios=2628/0, merge=0/0, ticks=2699/0, in_queue=2699, util=93.78% 00:34:04.496 nvme0n3: ios=2155/0, merge=0/0, ticks=2456/0, in_queue=2456, util=96.03% 00:34:04.496 nvme0n4: ios=305/0, merge=0/0, ticks=2531/0, in_queue=2531, util=96.42% 00:34:04.496 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:04.496 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc3
00:34:04.756 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:04.756 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:34:05.017 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:05.017 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:34:05.017 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:05.017 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:34:05.279 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:34:05.279 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1913159
00:34:05.279 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:34:05.279 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:34:05.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:34:05.279 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:34:05.279 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0
00:34:05.279 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:34:05.279 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:05.279 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:34:05.279 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:05.279 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0
00:34:05.279 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:34:05.279 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:34:05.279 nvmf hotplug test: fio failed as expected
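This is the hotplug half of the test: the 10-second read job above was started in the background, the script slept 3 seconds, then deleted every backing bdev out from under the live target. Each deletion surfaced on the host as an fio io_u error (err=95, Operation not supported), so fio exiting non-zero (fio_status=4) is the expected result. A condensed sketch of the sequence, assembled from the trace (illustrative only; fio.sh drives this through fio-wrapper and $fio_pid):

    fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &    # long-running reads against all 4 namespaces
    fio_pid=$!
    sleep 3
    rpc.py bdev_raid_delete concat0                      # pull bdevs while I/O is in flight
    rpc.py bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        rpc.py bdev_malloc_delete $malloc_bdev
    done
    fio_status=0
    wait $fio_pid || fio_status=4                        # fio is expected to fail here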
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:05.542 rmmod nvme_tcp 00:34:05.542 rmmod nvme_fabrics 00:34:05.542 rmmod nvme_keyring 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1909819 ']' 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1909819 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1909819 ']' 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1909819 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:05.542 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1909819 00:34:05.803 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:05.803 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:05.803 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1909819' 00:34:05.803 killing process with pid 1909819 00:34:05.803 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1909819 00:34:05.803 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1909819 00:34:05.803 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:05.803 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:05.803 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:05.803 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:05.803 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:34:05.803 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:05.803 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:34:05.803 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:05.803 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:05.803 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.803 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.803 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.351 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:08.351 00:34:08.351 real 0m27.597s 00:34:08.351 user 2m10.270s 00:34:08.351 sys 0m12.377s 00:34:08.351 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:08.351 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:08.351 ************************************ 00:34:08.351 END TEST nvmf_fio_target 00:34:08.351 ************************************ 00:34:08.351 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:08.351 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:08.351 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:08.351 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:08.351 ************************************ 00:34:08.352 START TEST nvmf_bdevio 00:34:08.352 ************************************ 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:08.352 * Looking for test storage... 
00:34:08.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:08.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.352 --rc genhtml_branch_coverage=1 00:34:08.352 --rc genhtml_function_coverage=1 00:34:08.352 --rc genhtml_legend=1 00:34:08.352 --rc geninfo_all_blocks=1 00:34:08.352 --rc geninfo_unexecuted_blocks=1 00:34:08.352 00:34:08.352 ' 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:08.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.352 --rc genhtml_branch_coverage=1 00:34:08.352 --rc genhtml_function_coverage=1 00:34:08.352 --rc genhtml_legend=1 00:34:08.352 --rc geninfo_all_blocks=1 00:34:08.352 --rc geninfo_unexecuted_blocks=1 00:34:08.352 00:34:08.352 ' 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:08.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.352 --rc genhtml_branch_coverage=1 00:34:08.352 --rc genhtml_function_coverage=1 00:34:08.352 --rc genhtml_legend=1 00:34:08.352 --rc geninfo_all_blocks=1 00:34:08.352 --rc geninfo_unexecuted_blocks=1 00:34:08.352 00:34:08.352 ' 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:08.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.352 --rc genhtml_branch_coverage=1 00:34:08.352 --rc genhtml_function_coverage=1 00:34:08.352 --rc genhtml_legend=1 00:34:08.352 --rc geninfo_all_blocks=1 00:34:08.352 --rc geninfo_unexecuted_blocks=1 00:34:08.352 00:34:08.352 ' 00:34:08.352 12:38:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:08.352 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:08.353 12:38:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:08.353 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:16.497 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:16.497 12:38:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:16.497 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:16.497 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:16.497 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:16.497 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:16.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:16.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:34:16.498 00:34:16.498 --- 10.0.0.2 ping statistics --- 00:34:16.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.498 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:16.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:16.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:34:16.498 00:34:16.498 --- 10.0.0.1 ping statistics --- 00:34:16.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.498 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:16.498 12:38:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1918378 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1918378 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1918378 ']' 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:16.498 12:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.498 [2024-11-04 12:38:50.033874] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:16.498 [2024-11-04 12:38:50.035044] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:34:16.498 [2024-11-04 12:38:50.035098] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.498 [2024-11-04 12:38:50.141953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:16.498 [2024-11-04 12:38:50.190415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.498 [2024-11-04 12:38:50.190462] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.498 [2024-11-04 12:38:50.190471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.498 [2024-11-04 12:38:50.190479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.498 [2024-11-04 12:38:50.190485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:16.498 [2024-11-04 12:38:50.192321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:16.498 [2024-11-04 12:38:50.192481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:16.498 [2024-11-04 12:38:50.192626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:16.498 [2024-11-04 12:38:50.192627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:16.498 [2024-11-04 12:38:50.261081] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
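Condensed, the nvmfappstart sequence traced above amounts to the following shell sketch. The binary path, namespace name, and flags are copied from the log; backgrounding with & and capturing $! is an assumption about how the helper is written, and waitforlisten is the autotest_common.sh helper visible in the trace:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &   # mask 0x78 = cores 3-6, matching the reactor lines above
  nvmfpid=$!                                      # 1918378 in this run
  waitforlisten "$nvmfpid"                        # waits for /var/tmp/spdk.sock to accept RPCs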
00:34:16.498 [2024-11-04 12:38:50.262515] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:16.498 [2024-11-04 12:38:50.262780] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:16.498 [2024-11-04 12:38:50.263744] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:16.498 [2024-11-04 12:38:50.263792] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.498 [2024-11-04 12:38:50.877594] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.498 Malloc0 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.498 12:38:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.498 [2024-11-04 12:38:50.973950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:16.498 { 00:34:16.498 "params": { 00:34:16.498 "name": "Nvme$subsystem", 00:34:16.498 "trtype": "$TEST_TRANSPORT", 00:34:16.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.498 "adrfam": "ipv4", 00:34:16.498 "trsvcid": "$NVMF_PORT", 00:34:16.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.498 "hdgst": ${hdgst:-false}, 00:34:16.498 "ddgst": ${ddgst:-false} 00:34:16.498 }, 00:34:16.498 "method": "bdev_nvme_attach_controller" 00:34:16.498 } 00:34:16.498 EOF 00:34:16.498 )") 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:34:16.498 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:16.498 "params": { 00:34:16.498 "name": "Nvme1", 00:34:16.498 "trtype": "tcp", 00:34:16.498 "traddr": "10.0.0.2", 00:34:16.498 "adrfam": "ipv4", 00:34:16.498 "trsvcid": "4420", 00:34:16.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:16.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:16.498 "hdgst": false, 00:34:16.498 "ddgst": false 00:34:16.498 }, 00:34:16.498 "method": "bdev_nvme_attach_controller" 00:34:16.498 }' 00:34:16.498 [2024-11-04 12:38:51.031737] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
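Stripped of the xtrace noise, the target setup traced above is one transport, one bdev, and one subsystem. A sketch of the same sequence issued through scripts/rpc.py directly (rpc_cmd in the trace is the autotest wrapper around it; every value below is taken from the log):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then reads the JSON printed by gen_nvmf_target_json on /dev/fd/62 and attaches to that subsystem as Nvme1, the bdev exercised below.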
00:34:16.499 [2024-11-04 12:38:51.031809] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1918522 ]
00:34:16.760 [2024-11-04 12:38:51.096935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:34:16.760 [2024-11-04 12:38:51.142796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:16.760 [2024-11-04 12:38:51.142855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:34:16.760 [2024-11-04 12:38:51.143046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:16.760 I/O targets:
00:34:16.760 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:34:16.760
00:34:16.760
00:34:16.760 CUnit - A unit testing framework for C - Version 2.1-3
00:34:16.760 http://cunit.sourceforge.net/
00:34:16.760
00:34:16.760
00:34:16.760 Suite: bdevio tests on: Nvme1n1
00:34:16.760 Test: blockdev write read block ...passed
00:34:17.021 Test: blockdev write zeroes read block ...passed
00:34:17.021 Test: blockdev write zeroes read no split ...passed
00:34:17.021 Test: blockdev write zeroes read split ...passed
00:34:17.021 Test: blockdev write zeroes read split partial ...passed
00:34:17.021 Test: blockdev reset ...[2024-11-04 12:38:51.442002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:17.021 [2024-11-04 12:38:51.442065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbe0d0 (9): Bad file descriptor
00:34:17.021 [2024-11-04 12:38:51.447829] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:34:17.021 passed
00:34:17.021 Test: blockdev write read 8 blocks ...passed
00:34:17.021 Test: blockdev write read size > 128k ...passed
00:34:17.021 Test: blockdev write read invalid size ...passed
00:34:17.021 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:34:17.021 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:34:17.022 Test: blockdev write read max offset ...passed
00:34:17.283 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:34:17.283 Test: blockdev writev readv 8 blocks ...passed
00:34:17.283 Test: blockdev writev readv 30 x 1block ...passed
00:34:17.283 Test: blockdev writev readv block ...passed
00:34:17.283 Test: blockdev writev readv size > 128k ...passed
00:34:17.283 Test: blockdev writev readv size > 128k in two iovs ...passed
00:34:17.283 Test: blockdev comparev and writev ...[2024-11-04 12:38:51.714405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:34:17.283 [2024-11-04 12:38:51.714434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:17.283 [2024-11-04 12:38:51.714445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:34:17.283 [2024-11-04 12:38:51.714451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:17.283 [2024-11-04 12:38:51.714968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:34:17.283 [2024-11-04 12:38:51.714977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:34:17.283 [2024-11-04 12:38:51.714987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:34:17.283 [2024-11-04 12:38:51.714993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:34:17.283 [2024-11-04 12:38:51.715558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:34:17.283 [2024-11-04 12:38:51.715566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:34:17.283 [2024-11-04 12:38:51.715576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:34:17.283 [2024-11-04 12:38:51.715581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:34:17.283 [2024-11-04 12:38:51.716131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:34:17.283 [2024-11-04 12:38:51.716138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:34:17.283 [2024-11-04 12:38:51.716148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:34:17.283 [2024-11-04 12:38:51.716153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:34:17.283 passed
00:34:17.283 Test: blockdev nvme passthru rw ...passed
00:34:17.283 Test: blockdev nvme passthru vendor specific ...[2024-11-04 12:38:51.800616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:34:17.283 [2024-11-04 12:38:51.800626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:34:17.283 [2024-11-04 12:38:51.800970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:34:17.283 [2024-11-04 12:38:51.800977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:34:17.283 [2024-11-04 12:38:51.801340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:34:17.283 [2024-11-04 12:38:51.801347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:34:17.283 [2024-11-04 12:38:51.801678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:34:17.283 [2024-11-04 12:38:51.801692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:34:17.283 passed
00:34:17.283 Test: blockdev nvme admin passthru ...passed
00:34:17.544 Test: blockdev copy ...passed
00:34:17.544
00:34:17.544 Run Summary: Type Total Ran Passed Failed Inactive
00:34:17.544 suites 1 1 n/a 0 0
00:34:17.544 tests 23 23 23 0 0
00:34:17.544 asserts 152 152 152 0 n/a
00:34:17.544
00:34:17.544 Elapsed time = 1.156 seconds
00:34:17.544 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:17.544 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:17.544 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:34:17.544 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:17.544 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:34:17.544 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:34:17.544 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup
00:34:17.544 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:34:17.544 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:17.544 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:34:17.544 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:17.544 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:17.544 rmmod nvme_tcp
00:34:17.544 rmmod nvme_fabrics
00:34:17.544 rmmod nvme_keyring
00:34:17.544 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
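The nvmftestfini teardown traced around this point reduces to roughly the following (sketch; the pid comes from this run, and joining iptables-save, grep, and iptables-restore into one pipeline is an assumption about the iptr helper, whose individual commands appear in the trace):

  modprobe -v -r nvme-tcp                               # cascades: the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring unloading
  kill 1918378 && wait 1918378                          # killprocess: stop the interrupt-mode nvmf_tgt
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: drop the SPDK test firewall rule
  ip -4 addr flush cvl_0_1                              # release the initiator-side test address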
00:34:17.544 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:17.544 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:17.544 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1918378 ']' 00:34:17.544 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1918378 00:34:17.544 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1918378 ']' 00:34:17.544 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1918378 00:34:17.544 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:34:17.544 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:17.544 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1918378 00:34:17.544 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:34:17.544 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:34:17.544 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1918378' 00:34:17.545 killing process with pid 1918378 00:34:17.545 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1918378 00:34:17.545 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1918378 00:34:17.806 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:17.806 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:17.806 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:17.806 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:17.806 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:34:17.806 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:17.806 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:34:17.806 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:17.806 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:17.806 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.806 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:17.806 12:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.355 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:20.355 00:34:20.355 real 0m11.912s 00:34:20.355 user 
0m8.781s 00:34:20.355 sys 0m6.243s 00:34:20.355 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:20.355 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:20.355 ************************************ 00:34:20.355 END TEST nvmf_bdevio 00:34:20.355 ************************************ 00:34:20.355 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:20.355 00:34:20.355 real 4m56.367s 00:34:20.355 user 10m5.430s 00:34:20.355 sys 2m3.781s 00:34:20.355 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:20.355 12:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:20.355 ************************************ 00:34:20.355 END TEST nvmf_target_core_interrupt_mode 00:34:20.355 ************************************ 00:34:20.355 12:38:54 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:20.355 12:38:54 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:20.355 12:38:54 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:20.355 12:38:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:20.355 ************************************ 00:34:20.355 START TEST nvmf_interrupt 00:34:20.355 ************************************ 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:20.355 * Looking for test storage... 
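Past the storage probe, the long xtrace below is scripts/common.sh deciding whether the installed lcov predates 2.0: both version strings are split on IFS=.-: and compared field by field. A condensed sketch of the same idea, assuming purely numeric fields (the function name is illustrative, not the helper's real name):

# Succeeds when $1 is a lower version than $2; missing fields compare as 0.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo 'lcov predates 2.x'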
00:34:20.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:20.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.355 --rc genhtml_branch_coverage=1 00:34:20.355 --rc genhtml_function_coverage=1 00:34:20.355 --rc genhtml_legend=1 00:34:20.355 --rc geninfo_all_blocks=1 00:34:20.355 --rc geninfo_unexecuted_blocks=1 00:34:20.355 00:34:20.355 ' 00:34:20.355 12:38:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:20.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.356 --rc genhtml_branch_coverage=1 00:34:20.356 --rc genhtml_function_coverage=1 00:34:20.356 --rc genhtml_legend=1 00:34:20.356 --rc geninfo_all_blocks=1 00:34:20.356 --rc geninfo_unexecuted_blocks=1 00:34:20.356 00:34:20.356 ' 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:20.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.356 --rc genhtml_branch_coverage=1 00:34:20.356 --rc genhtml_function_coverage=1 00:34:20.356 --rc genhtml_legend=1 00:34:20.356 --rc geninfo_all_blocks=1 00:34:20.356 --rc geninfo_unexecuted_blocks=1 00:34:20.356 00:34:20.356 ' 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:20.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.356 --rc genhtml_branch_coverage=1 00:34:20.356 --rc genhtml_function_coverage=1 00:34:20.356 --rc genhtml_legend=1 00:34:20.356 --rc geninfo_all_blocks=1 00:34:20.356 --rc geninfo_unexecuted_blocks=1 00:34:20.356 00:34:20.356 ' 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:20.356 12:38:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:28.499 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:28.500 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.500 12:39:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:28.500 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:28.500 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:28.500 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:28.500 12:39:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:28.500 12:39:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:28.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:28.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:34:28.500 00:34:28.500 --- 10.0.0.2 ping statistics --- 00:34:28.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.500 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:28.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
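Condensed, the plumbing this block performs looks like the following: one port of the two-port E810 NIC (cvl_0_0) moves into a private network namespace to act as the target side, the other port (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule admits the NVMe/TCP listener port. Names and addresses are the ones recorded above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # confirm the target address answers; the netns ping back to 10.0.0.1 follows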
00:34:28.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:34:28.500 00:34:28.500 --- 10.0.0.1 ping statistics --- 00:34:28.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.500 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=1923023 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 1923023 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1923023 ']' 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:28.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:28.500 12:39:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.500 [2024-11-04 12:39:02.161632] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:28.500 [2024-11-04 12:39:02.162766] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:34:28.500 [2024-11-04 12:39:02.162823] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:28.500 [2024-11-04 12:39:02.238360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:28.500 [2024-11-04 12:39:02.281022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
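In outline, the target launch and provisioning that this and the next block trace amount to the following (rpc_cmd in the harness wraps scripts/rpc.py against the default /var/tmp/spdk.sock socket; the wait loop is a simplification of waitforlisten, and paths are abbreviated):

# nvmf_tgt inside the target namespace: shm id 0, all trace groups,
# interrupt mode, reactors pinned to cores 0-1 (-m 0x3).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done

# 10 MB file-backed AIO bdev, then the four provisioning RPCs recorded below.
dd if=/dev/zero of=test/nvmf/target/aiofile bs=2048 count=5000
./scripts/rpc.py bdev_aio_create test/nvmf/target/aiofile AIO0 2048
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420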
00:34:28.500 [2024-11-04 12:39:02.281065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:28.500 [2024-11-04 12:39:02.281073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:28.500 [2024-11-04 12:39:02.281080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:28.500 [2024-11-04 12:39:02.281086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:28.500 [2024-11-04 12:39:02.283361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:28.500 [2024-11-04 12:39:02.283440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:28.500 [2024-11-04 12:39:02.339627] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:28.500 [2024-11-04 12:39:02.340223] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:28.500 [2024-11-04 12:39:02.340538] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:28.501 12:39:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:28.501 12:39:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:34:28.501 12:39:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:28.501 12:39:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:28.501 12:39:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.501 12:39:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:28.501 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:28.501 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:28.501 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:28.501 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:28.501 5000+0 records in 00:34:28.501 5000+0 records out 00:34:28.501 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0179578 s, 570 MB/s 00:34:28.501 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:28.501 12:39:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.501 12:39:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.501 AIO0 00:34:28.501 12:39:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.501 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:28.501 12:39:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.501 12:39:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.761 [2024-11-04 12:39:03.071979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.761 12:39:03 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:28.761 [2024-11-04 12:39:03.112459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1923023 0 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1923023 0 idle 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1923023 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1923023 -w 256 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1923023 root 20 0 128.2g 43776 32256 S 6.7 0.0 0:00.25 reactor_0' 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1923023 root 20 0 128.2g 43776 32256 S 6.7 0.0 0:00.25 reactor_0 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=6 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1923023 1 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1923023 1 idle 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1923023 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1923023 -w 256 00:34:28.761 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1923061 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1923061 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1923236 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1923023 0 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1923023 0 busy 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1923023 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1923023 -w 256 00:34:29.022 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1923023 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.47 reactor_0' 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1923023 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.47 reactor_0 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1923023 1 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1923023 1 busy 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1923023 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1923023 -w 256 00:34:29.283 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:29.544 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1923061 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:00.30 reactor_1' 00:34:29.545 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1923061 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:00.30 reactor_1 00:34:29.545 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:29.545 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:29.545 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:34:29.545 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:34:29.545 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:29.545 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:29.545 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:29.545 12:39:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:29.545 12:39:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1923236 00:34:39.544 Initializing NVMe Controllers 00:34:39.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:39.545 Controller IO queue size 256, less than required. 00:34:39.545 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:39.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:39.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:39.545 Initialization complete. Launching workers. 
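The load generator driving this phase is the exact command captured above: ten seconds of 4 KiB random I/O at queue depth 256 with a 30% read mix (-M 30), its reactors pinned to cores 2-3 (-c 0xC) so the target's cores 0-1 stay measurable:

./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The queue-size notice above and the per-core latency summary that follows are perf's own output; the harness only checks that both reactors report busy while it runs.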
00:34:39.545 ========================================================
00:34:39.545 Latency(us)
00:34:39.545 Device Information : IOPS MiB/s Average min max
00:34:39.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16593.79 64.82 15436.47 3187.99 18398.38
00:34:39.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19371.29 75.67 13216.20 8818.08 28782.99
00:34:39.545 ========================================================
00:34:39.545 Total : 35965.08 140.49 14240.60 3187.99 28782.99
00:34:39.545
00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1923023 0 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1923023 0 idle 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1923023 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1923023 -w 256 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1923023 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.25 reactor_0' 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1923023 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.25 reactor_0 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1923023 1 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1923023 1 idle 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1923023 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt --
interrupt/common.sh@11 -- # local idx=1 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1923023 -w 256 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1923061 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1923061 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:39.545 12:39:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:40.116 12:39:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:34:40.116 12:39:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:34:40.116 12:39:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:34:40.116 12:39:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:34:40.116 12:39:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1923023 0 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1923023 0 idle 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1923023 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1923023 -w 256 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1923023 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.48 reactor_0' 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1923023 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.48 reactor_0 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:42.140 12:39:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:42.141 12:39:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1923023 1 00:34:42.141 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1923023 1 idle 00:34:42.141 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1923023 00:34:42.141 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:42.141 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:42.141 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:42.141 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:42.141 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:42.141 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:42.141 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
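Every busy/idle verdict in this test comes from a single top snapshot: the harness greps the reactor thread out of per-thread output, takes the %CPU column, truncates it to an integer, and compares it to a threshold (65 for busy and 30 for idle by default; the load phase above lowered BUSY_THRESHOLD to 30). A standalone sketch of one probe, assuming $nvmfpid holds the target pid:

line=$(top -bHn 1 -p "$nvmfpid" -w 256 | grep reactor_0)
cpu=$(awk '{print $9}' <<< "$line")   # %CPU column of top's thread view
cpu=${cpu%.*}                         # integer part: 99.9 -> 99, 6.7 -> 6
(( cpu > 30 )) && echo busy || echo idle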
00:34:42.141 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:42.141 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:42.141 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1923023 -w 256 00:34:42.141 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:42.401 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1923061 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.13 reactor_1' 00:34:42.401 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1923061 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.13 reactor_1 00:34:42.401 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:42.401 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:42.401 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:42.401 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:42.401 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:42.401 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:42.401 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:42.401 12:39:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:42.401 12:39:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:42.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:42.662 rmmod nvme_tcp 00:34:42.662 rmmod nvme_fabrics 00:34:42.662 rmmod nvme_keyring 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 
00:34:42.401 12:39:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:34:42.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:42.662 rmmod nvme_tcp
00:34:42.662 rmmod nvme_fabrics
00:34:42.662 rmmod nvme_keyring
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 1923023 ']'
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 1923023
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1923023 ']'
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 1923023
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:42.662 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1923023
00:34:42.923 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:34:42.923 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:34:42.923 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1923023'
00:34:42.923 killing process with pid 1923023
00:34:42.923 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 1923023
00:34:42.923 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 1923023
00:34:42.923 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:34:42.923 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:34:42.923 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:34:42.923 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr
00:34:42.923 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:34:42.923 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save
00:34:42.923 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore
00:34:42.923 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:42.923 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:42.923 12:39:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:42.923 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:34:42.923 12:39:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:45.471 12:39:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:45.471
00:34:45.471 real 0m25.036s
00:34:45.471 user 0m40.174s
00:34:45.471 sys 0m9.393s
00:34:45.471 12:39:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:45.471 12:39:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:45.472 ************************************
00:34:45.472 END TEST nvmf_interrupt
00:34:45.472 ************************************
00:34:45.472
00:34:45.472 real 29m25.202s
00:34:45.472 user 60m41.677s
00:34:45.472 sys 9m54.472s
00:34:45.472 12:39:19 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:45.472 12:39:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:45.472 ************************************
00:34:45.472 END TEST nvmf_tcp
00:34:45.472 ************************************
00:34:45.472 12:39:19 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]]
00:34:45.472 12:39:19 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:34:45.472 12:39:19 --
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:45.472 12:39:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:45.472 12:39:19 -- common/autotest_common.sh@10 -- # set +x 00:34:45.472 ************************************ 00:34:45.472 START TEST spdkcli_nvmf_tcp 00:34:45.472 ************************************ 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:45.472 * Looking for test storage... 00:34:45.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:45.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.472 --rc genhtml_branch_coverage=1 00:34:45.472 --rc genhtml_function_coverage=1 00:34:45.472 --rc genhtml_legend=1 00:34:45.472 --rc geninfo_all_blocks=1 00:34:45.472 --rc geninfo_unexecuted_blocks=1 00:34:45.472 00:34:45.472 ' 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:45.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.472 --rc genhtml_branch_coverage=1 00:34:45.472 --rc genhtml_function_coverage=1 00:34:45.472 --rc genhtml_legend=1 00:34:45.472 --rc geninfo_all_blocks=1 00:34:45.472 --rc geninfo_unexecuted_blocks=1 00:34:45.472 00:34:45.472 ' 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:45.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.472 --rc genhtml_branch_coverage=1 00:34:45.472 --rc genhtml_function_coverage=1 00:34:45.472 --rc genhtml_legend=1 00:34:45.472 --rc geninfo_all_blocks=1 00:34:45.472 --rc geninfo_unexecuted_blocks=1 00:34:45.472 00:34:45.472 ' 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:45.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.472 --rc genhtml_branch_coverage=1 00:34:45.472 --rc genhtml_function_coverage=1 00:34:45.472 --rc genhtml_legend=1 00:34:45.472 --rc geninfo_all_blocks=1 00:34:45.472 --rc geninfo_unexecuted_blocks=1 00:34:45.472 00:34:45.472 ' 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:45.472 
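The lcov version gate traced above is a field-wise numeric comparison: both version strings are split on '.', '-' and ':' and walked element by element, with a missing field treated as 0, so "1.15" sorts below "2". A standalone bash sketch of the same logic (the function name and final usage line are illustrative, and numeric fields are assumed, matching the ^[0-9]+$ checks in the trace):

    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
            (( a < b )) && return 0             # strictly less
            (( a > b )) && return 1
        done
        return 1                                # equal is not "less than"
    }

    version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # prints the message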
12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:45.472 12:39:19 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:45.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:45.472 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.473 12:39:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:45.473 12:39:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1926965 00:34:45.473 12:39:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1926965 00:34:45.473 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1926965 ']' 00:34:45.473 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.473 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:45.473 12:39:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:45.473 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:45.473 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:45.473 12:39:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.473 [2024-11-04 12:39:19.915131] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
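The target launch above follows a common SPDK test pattern: start nvmf_tgt in the background with a core mask (-m 0x3 puts reactors on cores 0 and 1), then poll its JSON-RPC socket until the application answers, which is what waitforlisten does before the test proceeds. A reduced sketch of that pattern, assuming an SPDK checkout as the working directory and a made-up retry budget; rpc.py and the rpc_get_methods call are standard SPDK:

    NVMF_TGT=./build/bin/nvmf_tgt
    RPC_SOCK=/var/tmp/spdk.sock

    $NVMF_TGT -m 0x3 -p 0 &            # -p 0 selects the main core
    tgt_pid=$!

    for (( i = 0; i < 100; i++ )); do
        # rpc_get_methods only succeeds once the app listens on the socket
        if ./scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods &> /dev/null; then
            echo "nvmf_tgt ($tgt_pid) is up"
            break
        fi
        sleep 0.1
    done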
00:34:45.473 [2024-11-04 12:39:19.915235] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1926965 ] 00:34:45.473 [2024-11-04 12:39:19.980704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:45.473 [2024-11-04 12:39:20.028626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:45.473 [2024-11-04 12:39:20.028629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.415 12:39:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:46.415 12:39:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:34:46.415 12:39:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:46.415 12:39:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:46.415 12:39:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:46.415 12:39:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:46.415 12:39:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:46.415 12:39:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:46.415 12:39:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:46.415 12:39:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:46.415 12:39:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:46.415 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:46.415 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:46.415 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:46.415 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:46.415 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:46.415 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:46.415 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:46.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:46.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:46.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:46.415 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:46.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:46.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:46.415 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:46.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:46.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:34:46.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:46.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:46.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:46.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:46.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:46.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:46.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:46.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:46.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:46.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:46.415 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:46.415 ' 00:34:48.963 [2024-11-04 12:39:23.173767] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:49.906 [2024-11-04 12:39:24.381735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:52.451 [2024-11-04 12:39:26.600299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:54.365 [2024-11-04 12:39:28.506015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:55.747 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:55.747 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:55.747 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:55.747 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:55.747 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:55.747 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:55.747 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:55.747 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:55.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:55.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:55.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:55.747 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:55.748 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:55.748 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:55.748 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:55.748 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:55.748 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:55.748 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:55.748 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:55.748 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:55.748 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:55.748 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:55.748 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:55.748 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:55.748 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:55.748 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:55.748 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:55.748 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:55.748 12:39:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:55.748 12:39:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:55.748 12:39:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:55.748 12:39:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:55.748 12:39:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:55.748 12:39:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:55.748 12:39:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:55.748 12:39:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:56.008 12:39:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:56.008 12:39:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:56.008 12:39:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:56.008 12:39:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:56.008 12:39:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:56.008 
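The check_match step just completed above is a golden-file comparison: the live spdkcli tree is dumped with spdkcli.py ll /nvmf and checked against a recorded expectation. SPDK's test/app/match/match tool takes the .match path and compares it against the same-named file without the suffix, tolerating wildcard placeholders for values that vary run to run. A simplified stand-in using plain diff (paths assumed; diff has no wildcard support, so this is stricter than the real tool):

    # Dump the live configuration tree the same way the trace does.
    ./scripts/spdkcli.py ll /nvmf > /tmp/spdkcli_nvmf.test
    # Real suite: test/app/match/match <path>/spdkcli_nvmf.test.match
    diff /tmp/spdkcli_nvmf.test test/spdkcli/match_files/spdkcli_nvmf.test.match \
        && echo 'spdkcli tree matches the recorded expectation'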
12:39:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:56.008 12:39:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:56.008 12:39:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:56.269 12:39:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:56.269 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:56.269 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:56.269 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:56.269 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:56.269 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:56.269 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:56.269 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:56.269 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:56.269 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:56.269 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:56.269 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:56.269 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:56.269 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:56.269 ' 00:35:02.850 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:02.850 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:02.850 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:02.850 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:02.850 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:02.850 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:02.850 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:02.850 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:02.850 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:02.850 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:02.850 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:02.850 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:02.850 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:02.850 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:02.850 
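Each "Executing command" line above corresponds to one parsed job entry: spdkcli_job.py consumes newline-separated triples of an spdkcli command, a substring its output must contain, and an optional reentrancy flag, echoed as the third list element (it appears as False when omitted, as in this delete pass). An illustrative invocation in the same quoting style; the exact argument grammar here is inferred from the trace, not taken from the script's documentation:

    ./test/spdkcli/spdkcli_job.py "'/bdevs/malloc create 32 512 Malloc1' 'Malloc1' True
    '/bdevs/malloc delete Malloc1' 'Malloc1'"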
12:39:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1926965 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1926965 ']' 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1926965 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1926965 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1926965' 00:35:02.850 killing process with pid 1926965 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1926965 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1926965 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1926965 ']' 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1926965 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1926965 ']' 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1926965 00:35:02.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1926965) - No such process 00:35:02.850 12:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1926965 is not found' 00:35:02.851 Process with pid 1926965 is not found 00:35:02.851 12:39:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:02.851 12:39:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:02.851 12:39:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:02.851 00:35:02.851 real 0m16.862s 00:35:02.851 user 0m35.789s 00:35:02.851 sys 0m0.764s 00:35:02.851 12:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:02.851 12:39:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:02.851 ************************************ 00:35:02.851 END TEST spdkcli_nvmf_tcp 00:35:02.851 ************************************ 00:35:02.851 12:39:36 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:02.851 12:39:36 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:02.851 12:39:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:02.851 12:39:36 -- common/autotest_common.sh@10 -- # set +x 00:35:02.851 ************************************ 00:35:02.851 START TEST nvmf_identify_passthru 00:35:02.851 ************************************ 00:35:02.851 12:39:36 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:02.851 * Looking for test 
storage... 00:35:02.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:02.851 12:39:36 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:02.851 12:39:36 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:35:02.851 12:39:36 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:02.851 12:39:36 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:02.851 12:39:36 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:02.851 12:39:36 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:02.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.851 --rc genhtml_branch_coverage=1 00:35:02.851 --rc genhtml_function_coverage=1 00:35:02.851 --rc genhtml_legend=1 00:35:02.851 --rc geninfo_all_blocks=1 00:35:02.851 --rc geninfo_unexecuted_blocks=1 00:35:02.851 00:35:02.851 ' 00:35:02.851 12:39:36 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:02.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.851 --rc genhtml_branch_coverage=1 00:35:02.851 --rc genhtml_function_coverage=1 00:35:02.851 --rc genhtml_legend=1 00:35:02.851 --rc geninfo_all_blocks=1 00:35:02.851 --rc geninfo_unexecuted_blocks=1 00:35:02.851 00:35:02.851 ' 00:35:02.851 12:39:36 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:02.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.851 --rc genhtml_branch_coverage=1 00:35:02.851 --rc genhtml_function_coverage=1 00:35:02.851 --rc genhtml_legend=1 00:35:02.851 --rc geninfo_all_blocks=1 00:35:02.851 --rc geninfo_unexecuted_blocks=1 00:35:02.851 00:35:02.851 ' 00:35:02.851 12:39:36 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:02.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.851 --rc genhtml_branch_coverage=1 00:35:02.851 --rc genhtml_function_coverage=1 00:35:02.851 --rc genhtml_legend=1 00:35:02.851 --rc geninfo_all_blocks=1 00:35:02.851 --rc geninfo_unexecuted_blocks=1 00:35:02.851 00:35:02.851 ' 00:35:02.851 12:39:36 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:02.851 12:39:36 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.851 12:39:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.851 12:39:36 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.851 12:39:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:02.851 12:39:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:02.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:02.851 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:02.851 12:39:36 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:02.851 12:39:36 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:02.852 12:39:36 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:02.852 12:39:36 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:02.852 12:39:36 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.852 12:39:36 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.852 12:39:36 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.852 12:39:36 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:02.852 12:39:36 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.852 12:39:36 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:02.852 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:02.852 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:02.852 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:02.852 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:02.852 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:02.852 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.852 12:39:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:02.852 12:39:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.852 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:02.852 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:02.852 12:39:36 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:02.852 12:39:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:09.437 12:39:43 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:09.437 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:09.437 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]]
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]]
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:35:09.437 Found net devices under 0000:4b:00.0: cvl_0_0
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]]
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:35:09.437 Found net devices under 0000:4b:00.1: cvl_0_1
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:35:09.437 12:39:43 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:35:09.437 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
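The namespace plumbing above gives target and initiator genuinely separate network stacks on one host: one port of the E810 NIC moves into a private namespace and gets 10.0.0.2, the other stays in the default namespace as 10.0.0.1, and the NVMe-oF TCP port is opened in the firewall. Condensed to its essentials (interface and namespace names taken from the trace), with the pings that follow verifying each direction:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                  # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT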
00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:09.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:09.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms
00:35:09.699
00:35:09.699 --- 10.0.0.2 ping statistics ---
00:35:09.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:09.699 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms
00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:09.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:09.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms
00:35:09.699
00:35:09.699 --- 10.0.0.1 ping statistics ---
00:35:09.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:09.699 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms
00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0
00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:35:09.699 12:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:09.699 12:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=()
00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs
00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs))
00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs
00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=()
00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs
00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:35:09.960 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:35:09.960 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0
00:35:09.960 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0
00:35:09.960 12:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0
00:35:09.960 12:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']'
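get_first_nvme_bdf above works by asking gen_nvme.sh for a JSON bdev config covering every local NVMe controller and extracting the PCI addresses with jq; the first address becomes the passthru device, whose serial number is then read directly over PCIe by the identify invocation that continues below. The same flow as a standalone sketch (relative paths in an SPDK checkout assumed; the jq filter and identify arguments are taken from the trace):

    bdfs=($(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    bdf=${bdfs[0]}                                   # e.g. 0000:65:00.0
    ./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 |
        grep 'Serial Number:' | awk '{print $3}'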
00:35:09.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:35:09.699 00:35:09.699 --- 10.0.0.1 ping statistics --- 00:35:09.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:09.699 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:09.699 12:39:44 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:09.699 12:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:09.699 12:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:09.699 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:35:09.960 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:35:09.960 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:35:09.960 12:39:44 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:35:09.960 12:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:09.960 12:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:09.960 12:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:09.960 12:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:09.960 12:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:10.220 12:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:35:10.220 12:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:10.220 12:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:10.220 12:39:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:10.793 12:39:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:10.793 12:39:45 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:10.793 12:39:45 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:10.793 12:39:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.793 12:39:45 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:10.793 12:39:45 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:10.793 12:39:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.793 12:39:45 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1934120 00:35:10.793 12:39:45 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:10.793 12:39:45 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:10.793 12:39:45 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1934120 00:35:10.793 12:39:45 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1934120 ']' 00:35:10.793 12:39:45 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:10.793 12:39:45 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:10.793 12:39:45 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:10.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:10.793 12:39:45 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:10.793 12:39:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.793 [2024-11-04 12:39:45.344860] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:35:10.793 [2024-11-04 12:39:45.344917] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:11.054 [2024-11-04 12:39:45.411464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:11.054 [2024-11-04 12:39:45.448871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:11.054 [2024-11-04 12:39:45.448906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:11.054 [2024-11-04 12:39:45.448914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:11.054 [2024-11-04 12:39:45.448920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:11.054 [2024-11-04 12:39:45.448926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:11.054 [2024-11-04 12:39:45.450654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.054 [2024-11-04 12:39:45.450868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:11.054 [2024-11-04 12:39:45.450868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:11.054 [2024-11-04 12:39:45.450772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:11.623 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:11.624 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:35:11.624 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:11.624 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.624 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.624 INFO: Log level set to 20 00:35:11.624 INFO: Requests: 00:35:11.624 { 00:35:11.624 "jsonrpc": "2.0", 00:35:11.624 "method": "nvmf_set_config", 00:35:11.624 "id": 1, 00:35:11.624 "params": { 00:35:11.624 "admin_cmd_passthru": { 00:35:11.624 "identify_ctrlr": true 00:35:11.624 } 00:35:11.624 } 00:35:11.624 } 00:35:11.624 00:35:11.624 INFO: response: 00:35:11.624 { 00:35:11.624 "jsonrpc": "2.0", 00:35:11.624 "id": 1, 00:35:11.624 "result": true 00:35:11.624 } 00:35:11.624 00:35:11.624 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.624 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:11.624 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.624 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.624 INFO: Setting log level to 20 00:35:11.624 INFO: Setting log level to 20 00:35:11.624 INFO: Log level set to 20 00:35:11.624 INFO: Log level set to 20 00:35:11.624 INFO: Requests: 00:35:11.624 { 00:35:11.624 "jsonrpc": "2.0", 00:35:11.624 "method": "framework_start_init", 00:35:11.624 "id": 1 00:35:11.624 } 00:35:11.624 00:35:11.624 INFO: Requests: 00:35:11.624 { 00:35:11.624 "jsonrpc": "2.0", 00:35:11.624 "method": "framework_start_init", 00:35:11.624 "id": 1 00:35:11.624 } 00:35:11.624 00:35:11.883 [2024-11-04 12:39:46.211374] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:11.883 INFO: response: 00:35:11.883 { 00:35:11.883 "jsonrpc": "2.0", 00:35:11.883 "id": 1, 00:35:11.883 "result": true 00:35:11.883 } 00:35:11.883 00:35:11.883 INFO: response: 00:35:11.883 { 00:35:11.883 "jsonrpc": "2.0", 00:35:11.883 "id": 1, 00:35:11.883 "result": true 00:35:11.883 } 00:35:11.883 00:35:11.883 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.883 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:11.883 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.883 12:39:46 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:11.883 INFO: Setting log level to 40 00:35:11.883 INFO: Setting log level to 40 00:35:11.883 INFO: Setting log level to 40 00:35:11.883 [2024-11-04 12:39:46.224697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:11.883 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.884 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:11.884 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:11.884 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.884 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:11.884 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.884 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.144 Nvme0n1 00:35:12.144 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.144 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:12.144 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.144 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.144 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.144 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:12.144 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.144 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.144 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.144 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:12.144 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.144 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.144 [2024-11-04 12:39:46.608997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:12.144 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.144 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:12.144 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.144 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.144 [ 00:35:12.144 { 00:35:12.144 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:12.144 "subtype": "Discovery", 00:35:12.144 "listen_addresses": [], 00:35:12.144 "allow_any_host": true, 00:35:12.144 "hosts": [] 00:35:12.144 }, 00:35:12.144 { 00:35:12.144 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:12.144 "subtype": "NVMe", 00:35:12.144 "listen_addresses": [ 00:35:12.144 { 00:35:12.144 "trtype": "TCP", 00:35:12.144 "adrfam": "IPv4", 00:35:12.144 "traddr": "10.0.0.2", 00:35:12.144 "trsvcid": "4420" 00:35:12.144 } 00:35:12.144 ], 00:35:12.144 "allow_any_host": true, 00:35:12.144 "hosts": [], 00:35:12.144 "serial_number": 
"SPDK00000000000001", 00:35:12.144 "model_number": "SPDK bdev Controller", 00:35:12.144 "max_namespaces": 1, 00:35:12.144 "min_cntlid": 1, 00:35:12.144 "max_cntlid": 65519, 00:35:12.144 "namespaces": [ 00:35:12.144 { 00:35:12.144 "nsid": 1, 00:35:12.144 "bdev_name": "Nvme0n1", 00:35:12.144 "name": "Nvme0n1", 00:35:12.144 "nguid": "36344730526054870025384500000044", 00:35:12.144 "uuid": "36344730-5260-5487-0025-384500000044" 00:35:12.144 } 00:35:12.144 ] 00:35:12.144 } 00:35:12.144 ] 00:35:12.144 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.144 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:12.144 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:12.144 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:12.406 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:35:12.406 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:12.406 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:12.406 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:12.406 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:12.406 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:35:12.406 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:12.406 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:12.406 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.406 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.406 12:39:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.406 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:12.406 12:39:46 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:12.406 12:39:46 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:12.406 12:39:46 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:12.406 12:39:46 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:12.406 12:39:46 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:12.406 12:39:46 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:12.406 12:39:46 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:12.406 rmmod nvme_tcp 00:35:12.406 rmmod nvme_fabrics 00:35:12.666 rmmod nvme_keyring 00:35:12.666 12:39:47 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:12.666 12:39:47 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:12.667 12:39:47 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:12.667 12:39:47 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 
1934120 ']' 00:35:12.667 12:39:47 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 1934120 00:35:12.667 12:39:47 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1934120 ']' 00:35:12.667 12:39:47 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1934120 00:35:12.667 12:39:47 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:35:12.667 12:39:47 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:12.667 12:39:47 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1934120 00:35:12.667 12:39:47 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:12.667 12:39:47 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:12.667 12:39:47 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1934120' 00:35:12.667 killing process with pid 1934120 00:35:12.667 12:39:47 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1934120 00:35:12.667 12:39:47 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1934120 00:35:12.927 12:39:47 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:12.927 12:39:47 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:12.927 12:39:47 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:12.927 12:39:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:12.927 12:39:47 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:35:12.927 12:39:47 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:12.927 12:39:47 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:35:12.927 12:39:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:12.927 12:39:47 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:12.927 12:39:47 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.927 12:39:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:12.927 12:39:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:15.473 12:39:49 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:15.473 00:35:15.473 real 0m12.877s 00:35:15.473 user 0m9.723s 00:35:15.473 sys 0m6.620s 00:35:15.473 12:39:49 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:15.473 12:39:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:15.473 ************************************ 00:35:15.473 END TEST nvmf_identify_passthru 00:35:15.473 ************************************ 00:35:15.473 12:39:49 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:15.473 12:39:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:15.473 12:39:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:15.473 12:39:49 -- common/autotest_common.sh@10 -- # set +x 00:35:15.473 ************************************ 00:35:15.473 START TEST nvmf_dif 00:35:15.473 ************************************ 00:35:15.473 12:39:49 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:15.473 * Looking for test storage... 
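[Editor's sketch] The identify_passthru run that just ended reduces to a short RPC sequence: start nvmf_tgt with --wait-for-rpc inside the target namespace, enable identify passthru before framework init, attach the local PCIe controller as bdev Nvme0, export it over NVMe/TCP, and check that the serial and model numbers reported over fabrics match the PCIe values. A condensed sketch of that flow, assuming the SPDK checkout at $rootdir and the cvl_0_0_ns_spdk namespace already configured as in the trace above (scripts/rpc.py stands in for the harness's rpc_cmd wrapper):

    # Sketch only: mirrors the commands visible in the trace above.
    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # (wait for the app to listen on /var/tmp/spdk.sock before issuing RPCs)
    rpc="$rootdir/scripts/rpc.py"
    "$rpc" nvmf_set_config --passthru-identify-ctrlr   # must precede framework init
    "$rpc" framework_start_init
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Compare Identify data over PCIe vs. NVMe/TCP; the test passes when they match.
    pcie_sn=$("$rootdir/build/bin/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 | awk '/Serial Number:/ {print $3}')
    tcp_sn=$("$rootdir/build/bin/spdk_nvme_identify" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | awk '/Serial Number:/ {print $3}')
    [ "$pcie_sn" = "$tcp_sn" ] && echo "passthru OK: $tcp_sn"
    kill "$nvmfpid"

The same comparison is repeated for the model number; with --passthru-identify-ctrlr set, the target forwards Identify admin commands to the underlying controller instead of answering with SPDK's own controller data, which is why S64GNE0R605487/SAMSUNG shows up on both paths above.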
00:35:15.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:15.473 12:39:49 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:15.473 12:39:49 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:35:15.473 12:39:49 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:15.473 12:39:49 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:15.473 12:39:49 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:15.473 12:39:49 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:15.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.473 --rc genhtml_branch_coverage=1 00:35:15.473 --rc genhtml_function_coverage=1 00:35:15.473 --rc genhtml_legend=1 00:35:15.473 --rc geninfo_all_blocks=1 00:35:15.473 --rc geninfo_unexecuted_blocks=1 00:35:15.473 00:35:15.473 ' 00:35:15.473 12:39:49 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:15.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.473 --rc genhtml_branch_coverage=1 00:35:15.473 --rc genhtml_function_coverage=1 00:35:15.473 --rc genhtml_legend=1 00:35:15.473 --rc geninfo_all_blocks=1 00:35:15.473 --rc geninfo_unexecuted_blocks=1 00:35:15.473 00:35:15.473 ' 00:35:15.473 12:39:49 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:35:15.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.473 --rc genhtml_branch_coverage=1 00:35:15.473 --rc genhtml_function_coverage=1 00:35:15.473 --rc genhtml_legend=1 00:35:15.473 --rc geninfo_all_blocks=1 00:35:15.473 --rc geninfo_unexecuted_blocks=1 00:35:15.473 00:35:15.473 ' 00:35:15.473 12:39:49 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:15.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.473 --rc genhtml_branch_coverage=1 00:35:15.473 --rc genhtml_function_coverage=1 00:35:15.473 --rc genhtml_legend=1 00:35:15.473 --rc geninfo_all_blocks=1 00:35:15.473 --rc geninfo_unexecuted_blocks=1 00:35:15.473 00:35:15.473 ' 00:35:15.473 12:39:49 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:15.473 12:39:49 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:15.473 12:39:49 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:15.474 12:39:49 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.474 12:39:49 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.474 12:39:49 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.474 12:39:49 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:15.474 12:39:49 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:15.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:15.474 12:39:49 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:15.474 12:39:49 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:15.474 12:39:49 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:15.474 12:39:49 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:15.474 12:39:49 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.474 12:39:49 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:15.474 12:39:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:15.474 12:39:49 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:35:15.474 12:39:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:22.064 12:39:56 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:22.065 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:22.065 
12:39:56 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:22.065 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:22.065 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:22.065 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:22.065 12:39:56 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:22.327 12:39:56 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:22.327 12:39:56 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:22.327 12:39:56 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:22.327 12:39:56 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:22.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:22.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:35:22.327 00:35:22.327 --- 10.0.0.2 ping statistics --- 00:35:22.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:22.327 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:35:22.327 12:39:56 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:22.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:22.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:35:22.327 00:35:22.327 --- 10.0.0.1 ping statistics --- 00:35:22.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:22.327 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:35:22.327 12:39:56 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:22.327 12:39:56 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:35:22.327 12:39:56 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:35:22.327 12:39:56 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:24.876 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:24.876 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:24.876 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:24.876 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:24.876 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:24.876 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:24.876 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:24.876 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:24.876 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:24.876 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:24.876 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:24.876 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:24.876 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:24.876 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:24.876 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:24.876 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:24.876 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:25.446 12:39:59 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:25.446 12:39:59 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:25.446 12:39:59 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:25.446 12:39:59 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:25.446 12:39:59 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:25.447 12:39:59 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:25.447 12:39:59 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:25.447 12:39:59 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:25.447 12:39:59 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:25.447 12:39:59 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:25.447 12:39:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.447 12:39:59 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=1939984 00:35:25.447 12:39:59 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 1939984 00:35:25.447 12:39:59 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:25.447 12:39:59 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1939984 ']' 00:35:25.447 12:39:59 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.447 12:39:59 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:25.447 12:39:59 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:35:25.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:25.447 12:39:59 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:25.447 12:39:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.447 [2024-11-04 12:39:59.894158] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:35:25.447 [2024-11-04 12:39:59.894240] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:25.447 [2024-11-04 12:39:59.965674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.447 [2024-11-04 12:40:00.008684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:25.447 [2024-11-04 12:40:00.008722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:25.447 [2024-11-04 12:40:00.008730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:25.447 [2024-11-04 12:40:00.008737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:25.447 [2024-11-04 12:40:00.008743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:25.447 [2024-11-04 12:40:00.009379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:26.389 12:40:00 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:26.389 12:40:00 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:35:26.389 12:40:00 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:26.389 12:40:00 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:26.389 12:40:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:26.389 12:40:00 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:26.389 12:40:00 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:26.389 12:40:00 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:26.389 12:40:00 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.389 12:40:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:26.389 [2024-11-04 12:40:00.726478] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:26.389 12:40:00 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.389 12:40:00 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:26.389 12:40:00 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:26.389 12:40:00 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:26.389 12:40:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:26.389 ************************************ 00:35:26.389 START TEST fio_dif_1_default 00:35:26.389 ************************************ 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:26.389 bdev_null0 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:26.389 [2024-11-04 12:40:00.814854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:26.389 12:40:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:26.389 { 00:35:26.389 "params": { 00:35:26.389 "name": "Nvme$subsystem", 00:35:26.389 "trtype": "$TEST_TRANSPORT", 00:35:26.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:26.389 "adrfam": "ipv4", 00:35:26.389 "trsvcid": "$NVMF_PORT", 00:35:26.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:26.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:26.390 "hdgst": ${hdgst:-false}, 00:35:26.390 
"ddgst": ${ddgst:-false} 00:35:26.390 }, 00:35:26.390 "method": "bdev_nvme_attach_controller" 00:35:26.390 } 00:35:26.390 EOF 00:35:26.390 )") 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:26.390 "params": { 00:35:26.390 "name": "Nvme0", 00:35:26.390 "trtype": "tcp", 00:35:26.390 "traddr": "10.0.0.2", 00:35:26.390 "adrfam": "ipv4", 00:35:26.390 "trsvcid": "4420", 00:35:26.390 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:26.390 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:26.390 "hdgst": false, 00:35:26.390 "ddgst": false 00:35:26.390 }, 00:35:26.390 "method": "bdev_nvme_attach_controller" 00:35:26.390 }' 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:26.390 12:40:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:26.968 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:26.968 fio-3.35 00:35:26.968 Starting 1 thread 00:35:39.201 00:35:39.201 filename0: (groupid=0, jobs=1): err= 0: pid=1940507: Mon Nov 4 12:40:11 2024 00:35:39.201 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10026msec) 00:35:39.201 slat (nsec): min=5658, max=31925, avg=6613.36, stdev=1644.95 00:35:39.201 clat (usec): min=40904, max=44140, avg=41068.88, stdev=346.68 00:35:39.201 lat (usec): min=40913, max=44172, avg=41075.50, stdev=347.05 00:35:39.201 clat percentiles (usec): 00:35:39.201 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:39.201 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:39.201 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:35:39.201 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:35:39.201 | 99.99th=[44303] 00:35:39.201 bw ( KiB/s): min= 384, max= 416, per=99.64%, avg=388.80, stdev=11.72, samples=20 00:35:39.201 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:39.201 lat (msec) : 50=100.00% 00:35:39.201 cpu : usr=93.34%, sys=6.44%, ctx=11, majf=0, minf=228 00:35:39.201 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.201 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.201 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:39.201 00:35:39.201 Run status group 0 (all jobs): 
00:35:39.201 READ: bw=389KiB/s (399kB/s), 389KiB/s-389KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10026-10026msec 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.201 00:35:39.201 real 0m11.101s 00:35:39.201 user 0m24.632s 00:35:39.201 sys 0m0.969s 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:39.201 ************************************ 00:35:39.201 END TEST fio_dif_1_default 00:35:39.201 ************************************ 00:35:39.201 12:40:11 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:39.201 12:40:11 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:39.201 12:40:11 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:39.201 12:40:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:39.201 ************************************ 00:35:39.201 START TEST fio_dif_1_multi_subsystems 00:35:39.201 ************************************ 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.201 bdev_null0 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.201 [2024-11-04 12:40:11.994325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.201 12:40:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.201 bdev_null1 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.201 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:39.201 { 00:35:39.201 "params": { 00:35:39.201 "name": "Nvme$subsystem", 00:35:39.201 "trtype": "$TEST_TRANSPORT", 00:35:39.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:39.201 "adrfam": "ipv4", 00:35:39.201 "trsvcid": "$NVMF_PORT", 00:35:39.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:39.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:39.202 "hdgst": ${hdgst:-false}, 00:35:39.202 "ddgst": ${ddgst:-false} 00:35:39.202 }, 00:35:39.202 "method": "bdev_nvme_attach_controller" 00:35:39.202 } 00:35:39.202 EOF 00:35:39.202 )") 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file = 1 )) 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:39.202 { 00:35:39.202 "params": { 00:35:39.202 "name": "Nvme$subsystem", 00:35:39.202 "trtype": "$TEST_TRANSPORT", 00:35:39.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:39.202 "adrfam": "ipv4", 00:35:39.202 "trsvcid": "$NVMF_PORT", 00:35:39.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:39.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:39.202 "hdgst": ${hdgst:-false}, 00:35:39.202 "ddgst": ${ddgst:-false} 00:35:39.202 }, 00:35:39.202 "method": "bdev_nvme_attach_controller" 00:35:39.202 } 00:35:39.202 EOF 00:35:39.202 )") 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:39.202 "params": { 00:35:39.202 "name": "Nvme0", 00:35:39.202 "trtype": "tcp", 00:35:39.202 "traddr": "10.0.0.2", 00:35:39.202 "adrfam": "ipv4", 00:35:39.202 "trsvcid": "4420", 00:35:39.202 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:39.202 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:39.202 "hdgst": false, 00:35:39.202 "ddgst": false 00:35:39.202 }, 00:35:39.202 "method": "bdev_nvme_attach_controller" 00:35:39.202 },{ 00:35:39.202 "params": { 00:35:39.202 "name": "Nvme1", 00:35:39.202 "trtype": "tcp", 00:35:39.202 "traddr": "10.0.0.2", 00:35:39.202 "adrfam": "ipv4", 00:35:39.202 "trsvcid": "4420", 00:35:39.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:39.202 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:39.202 "hdgst": false, 00:35:39.202 "ddgst": false 00:35:39.202 }, 00:35:39.202 "method": "bdev_nvme_attach_controller" 00:35:39.202 }' 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:39.202 12:40:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.202 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:39.202 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:39.202 fio-3.35 00:35:39.202 Starting 2 threads 00:35:49.217 00:35:49.217 filename0: (groupid=0, jobs=1): err= 0: pid=1942850: Mon Nov 4 12:40:23 2024 00:35:49.217 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10037msec) 00:35:49.217 slat (nsec): min=5659, max=33534, avg=6674.49, stdev=1783.98 00:35:49.217 clat (usec): min=40885, max=42999, avg=41455.15, stdev=578.49 00:35:49.217 lat (usec): min=40894, max=43005, avg=41461.83, stdev=578.45 00:35:49.217 clat percentiles (usec): 00:35:49.217 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:49.217 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:35:49.217 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:49.217 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:35:49.217 | 99.99th=[43254] 00:35:49.217 bw ( KiB/s): min= 352, max= 416, per=40.32%, avg=385.60, stdev=12.61, samples=20 00:35:49.217 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:35:49.217 lat (msec) : 50=100.00% 00:35:49.217 cpu : usr=95.59%, sys=4.21%, ctx=9, majf=0, minf=205 00:35:49.217 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.217 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.217 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:49.217 filename1: (groupid=0, jobs=1): err= 0: pid=1942851: Mon Nov 4 12:40:23 2024 00:35:49.217 read: IOPS=142, BW=570KiB/s (583kB/s)(5712KiB/10027msec) 00:35:49.217 slat (nsec): min=5653, max=32563, avg=6778.59, stdev=1527.63 00:35:49.217 clat (usec): min=749, max=42950, avg=28067.52, stdev=18859.51 00:35:49.217 lat (usec): min=755, max=42956, avg=28074.30, stdev=18859.25 00:35:49.217 clat percentiles (usec): 00:35:49.217 | 1.00th=[ 816], 5.00th=[ 857], 10.00th=[ 873], 20.00th=[ 906], 00:35:49.217 | 30.00th=[ 947], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:49.217 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:35:49.217 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:35:49.217 | 99.99th=[42730] 00:35:49.217 bw ( KiB/s): min= 352, max= 768, per=59.59%, avg=569.60, stdev=181.79, samples=20 00:35:49.217 iops : min= 88, max= 192, avg=142.40, stdev=45.45, samples=20 00:35:49.217 lat (usec) : 750=0.07%, 1000=31.86% 00:35:49.217 lat (msec) : 2=0.56%, 50=67.51% 00:35:49.217 cpu : usr=95.84%, sys=3.96%, ctx=7, majf=0, minf=64 00:35:49.217 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:49.217 issued rwts: total=1428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.217 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:49.217 00:35:49.217 Run status group 0 (all jobs): 00:35:49.217 READ: bw=955KiB/s (978kB/s), 386KiB/s-570KiB/s (395kB/s-583kB/s), io=9584KiB (9814kB), run=10027-10037msec 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.217 00:35:49.217 real 0m11.339s 00:35:49.217 user 0m37.753s 00:35:49.217 sys 0m1.128s 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:49.217 12:40:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:49.217 ************************************ 00:35:49.217 END TEST fio_dif_1_multi_subsystems 00:35:49.217 ************************************ 00:35:49.217 12:40:23 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:49.217 12:40:23 nvmf_dif -- common/autotest_common.sh@1101 
-- # '[' 2 -le 1 ']' 00:35:49.217 12:40:23 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:49.217 12:40:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:49.217 ************************************ 00:35:49.217 START TEST fio_dif_rand_params 00:35:49.217 ************************************ 00:35:49.217 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.218 bdev_null0 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.218 [2024-11-04 12:40:23.412719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.218 
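
The setup traced above is the pattern every test in this file repeats before launching fio: create a null bdev that carries 16 bytes of per-block metadata and the requested DIF type, wrap it in an NVMe-oF subsystem, attach the bdev as a namespace, and publish a TCP listener. Below is a minimal stand-alone sketch of the same sequence, assuming SPDK's scripts/rpc.py wrapper and an already-created TCP transport on the target (neither appears verbatim in this trace, which goes through the suite's rpc_cmd helper); the RPC names and arguments themselves mirror the trace.

#!/usr/bin/env bash
set -eu

# Assumptions: rpc.py location and a pre-existing TCP transport; the RPC
# calls themselves match the rpc_cmd invocations traced above.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

create_subsystem() {
    local sub_id=$1 dif_type=$2
    # 64 MiB null bdev, 512 B blocks, 16 B of metadata per block carrying
    # end-to-end protection information (DIF) of the requested type.
    rpc bdev_null_create "bdev_null$sub_id" 64 512 \
        --md-size 16 --dif-type "$dif_type"
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub_id" \
        --serial-number "53313233-$sub_id" --allow-any-host
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub_id" "bdev_null$sub_id"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub_id" \
        -t tcp -a 10.0.0.2 -s 4420
}

create_subsystem 0 3    # NULL_DIF=3, as in the fio_dif_rand_params case that follows

Teardown is the mirror image seen at the end of each test above: nvmf_delete_subsystem per subsystem, then bdev_null_delete. The earlier fio_dif_1_multi_subsystems run used the same sequence twice with --dif-type 1.
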
12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:49.218 { 00:35:49.218 "params": { 00:35:49.218 "name": "Nvme$subsystem", 00:35:49.218 "trtype": "$TEST_TRANSPORT", 00:35:49.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:49.218 "adrfam": "ipv4", 00:35:49.218 "trsvcid": "$NVMF_PORT", 00:35:49.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:49.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:49.218 "hdgst": ${hdgst:-false}, 00:35:49.218 "ddgst": ${ddgst:-false} 00:35:49.218 }, 00:35:49.218 "method": "bdev_nvme_attach_controller" 00:35:49.218 } 00:35:49.218 EOF 00:35:49.218 )") 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:49.218 "params": { 00:35:49.218 "name": "Nvme0", 00:35:49.218 "trtype": "tcp", 00:35:49.218 "traddr": "10.0.0.2", 00:35:49.218 "adrfam": "ipv4", 00:35:49.218 "trsvcid": "4420", 00:35:49.218 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:49.218 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:49.218 "hdgst": false, 00:35:49.218 "ddgst": false 00:35:49.218 }, 00:35:49.218 "method": "bdev_nvme_attach_controller" 00:35:49.218 }' 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:49.218 12:40:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.481 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:49.481 ... 
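
The two /dev/fd arguments handed to fio above are anonymous pipes: fd 62 carries the JSON produced by gen_nvmf_target_json (the config+=(heredoc), jq and IFS=,/printf steps just traced), and fd 61 carries the job file built by gen_fio_conf. A condensed sketch of the config-assembly pattern follows, with the environment values hard-coded to this run's; the outer document that jq pretty-prints for --spdk_json_conf is not fully visible in the trace, so the sketch stops at the comma-joined fragment list.

#!/usr/bin/env bash
set -eu

# Values gen_nvmf_target_json reads from the test environment; fixed here
# to match this run.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in "${@:-0}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Comma-join the fragments. This is what the separate IFS=, and printf
# steps in the trace do; IFS has to be set in the (sub)shell itself, not
# as a prefix assignment, for the "${config[*]}" join to pick it up.
(IFS=,; printf '%s\n' "${config[*]}")

Run with no arguments it emits the single Nvme0 attach-controller fragment printed above; passing 0 1 2 reproduces the three-subsystem variant used later in this test.
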
00:35:49.481 fio-3.35 00:35:49.481 Starting 3 threads 00:35:56.067 00:35:56.067 filename0: (groupid=0, jobs=1): err= 0: pid=1945048: Mon Nov 4 12:40:29 2024 00:35:56.067 read: IOPS=149, BW=18.6MiB/s (19.6MB/s)(94.1MiB/5047msec) 00:35:56.067 slat (nsec): min=8295, max=32257, avg=9035.08, stdev=974.79 00:35:56.067 clat (msec): min=6, max=131, avg=20.04, stdev=19.22 00:35:56.067 lat (msec): min=6, max=131, avg=20.04, stdev=19.22 00:35:56.067 clat percentiles (msec): 00:35:56.067 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:35:56.067 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:35:56.067 | 70.00th=[ 14], 80.00th=[ 47], 90.00th=[ 52], 95.00th=[ 54], 00:35:56.067 | 99.00th=[ 92], 99.50th=[ 93], 99.90th=[ 132], 99.95th=[ 132], 00:35:56.067 | 99.99th=[ 132] 00:35:56.067 bw ( KiB/s): min=13029, max=24064, per=22.64%, avg=19222.90, stdev=3893.66, samples=10 00:35:56.067 iops : min= 101, max= 188, avg=150.10, stdev=30.56, samples=10 00:35:56.067 lat (msec) : 10=23.37%, 20=56.57%, 50=5.18%, 100=14.74%, 250=0.13% 00:35:56.067 cpu : usr=96.00%, sys=3.73%, ctx=21, majf=0, minf=50 00:35:56.067 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:56.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:56.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:56.067 issued rwts: total=753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:56.067 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:56.067 filename0: (groupid=0, jobs=1): err= 0: pid=1945049: Mon Nov 4 12:40:29 2024 00:35:56.067 read: IOPS=254, BW=31.9MiB/s (33.4MB/s)(161MiB/5044msec) 00:35:56.067 slat (nsec): min=5913, max=64222, avg=8838.21, stdev=2165.82 00:35:56.067 clat (usec): min=5412, max=52818, avg=11723.27, stdev=7331.39 00:35:56.067 lat (usec): min=5421, max=52827, avg=11732.11, stdev=7331.54 00:35:56.067 clat percentiles (usec): 00:35:56.067 | 1.00th=[ 5932], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 8094], 00:35:56.067 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[10945], 00:35:56.067 | 70.00th=[11863], 80.00th=[13042], 90.00th=[14353], 95.00th=[15926], 00:35:56.067 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51643], 99.95th=[52691], 00:35:56.067 | 99.99th=[52691] 00:35:56.067 bw ( KiB/s): min=26368, max=37632, per=38.71%, avg=32870.40, stdev=4021.00, samples=10 00:35:56.067 iops : min= 206, max= 294, avg=256.80, stdev=31.41, samples=10 00:35:56.067 lat (msec) : 10=46.11%, 20=50.47%, 50=2.88%, 100=0.54% 00:35:56.067 cpu : usr=95.50%, sys=4.26%, ctx=9, majf=0, minf=108 00:35:56.067 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:56.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:56.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:56.067 issued rwts: total=1286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:56.067 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:56.067 filename0: (groupid=0, jobs=1): err= 0: pid=1945050: Mon Nov 4 12:40:29 2024 00:35:56.067 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(164MiB/5046msec) 00:35:56.067 slat (nsec): min=5861, max=31639, avg=8476.06, stdev=1511.24 00:35:56.067 clat (usec): min=5248, max=90684, avg=11519.87, stdev=7777.59 00:35:56.067 lat (usec): min=5256, max=90693, avg=11528.35, stdev=7777.79 00:35:56.067 clat percentiles (usec): 00:35:56.067 | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 7046], 20.00th=[ 7963], 00:35:56.067 | 30.00th=[ 8717], 
40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10683], 00:35:56.067 | 70.00th=[11731], 80.00th=[12780], 90.00th=[13960], 95.00th=[15139], 00:35:56.067 | 99.00th=[49546], 99.50th=[50594], 99.90th=[52167], 99.95th=[90702], 00:35:56.067 | 99.99th=[90702] 00:35:56.067 bw ( KiB/s): min=24064, max=39680, per=39.40%, avg=33459.20, stdev=5363.18, samples=10 00:35:56.067 iops : min= 188, max= 310, avg=261.40, stdev=41.90, samples=10 00:35:56.067 lat (msec) : 10=52.25%, 20=44.23%, 50=2.60%, 100=0.92% 00:35:56.067 cpu : usr=94.83%, sys=4.92%, ctx=20, majf=0, minf=112 00:35:56.067 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:56.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:56.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:56.067 issued rwts: total=1309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:56.067 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:56.067 00:35:56.067 Run status group 0 (all jobs): 00:35:56.067 READ: bw=82.9MiB/s (86.9MB/s), 18.6MiB/s-32.4MiB/s (19.6MB/s-34.0MB/s), io=419MiB (439MB), run=5044-5047msec 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 2 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.067 bdev_null0 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.067 [2024-11-04 12:40:29.605437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.067 bdev_null1 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:56.067 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.068 bdev_null2 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for 
subsystem in "${@:-1}" 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:56.068 { 00:35:56.068 "params": { 00:35:56.068 "name": "Nvme$subsystem", 00:35:56.068 "trtype": "$TEST_TRANSPORT", 00:35:56.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:56.068 "adrfam": "ipv4", 00:35:56.068 "trsvcid": "$NVMF_PORT", 00:35:56.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:56.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:56.068 "hdgst": ${hdgst:-false}, 00:35:56.068 "ddgst": ${ddgst:-false} 00:35:56.068 }, 00:35:56.068 "method": "bdev_nvme_attach_controller" 00:35:56.068 } 00:35:56.068 EOF 00:35:56.068 )") 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:56.068 { 00:35:56.068 "params": { 00:35:56.068 "name": "Nvme$subsystem", 00:35:56.068 "trtype": "$TEST_TRANSPORT", 00:35:56.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:56.068 "adrfam": "ipv4", 00:35:56.068 "trsvcid": "$NVMF_PORT", 00:35:56.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:56.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:56.068 "hdgst": ${hdgst:-false}, 00:35:56.068 "ddgst": ${ddgst:-false} 00:35:56.068 }, 00:35:56.068 "method": "bdev_nvme_attach_controller" 00:35:56.068 } 00:35:56.068 EOF 00:35:56.068 )") 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:56.068 { 00:35:56.068 "params": { 00:35:56.068 "name": "Nvme$subsystem", 00:35:56.068 "trtype": "$TEST_TRANSPORT", 00:35:56.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:56.068 "adrfam": "ipv4", 00:35:56.068 "trsvcid": "$NVMF_PORT", 00:35:56.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:56.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:56.068 "hdgst": ${hdgst:-false}, 00:35:56.068 "ddgst": ${ddgst:-false} 00:35:56.068 }, 00:35:56.068 "method": "bdev_nvme_attach_controller" 00:35:56.068 } 00:35:56.068 EOF 00:35:56.068 )") 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:56.068 "params": { 00:35:56.068 "name": "Nvme0", 00:35:56.068 "trtype": "tcp", 00:35:56.068 "traddr": "10.0.0.2", 00:35:56.068 "adrfam": "ipv4", 00:35:56.068 "trsvcid": "4420", 00:35:56.068 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:56.068 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:56.068 "hdgst": false, 00:35:56.068 "ddgst": false 00:35:56.068 }, 00:35:56.068 "method": "bdev_nvme_attach_controller" 00:35:56.068 },{ 00:35:56.068 "params": { 00:35:56.068 "name": "Nvme1", 00:35:56.068 "trtype": "tcp", 00:35:56.068 "traddr": "10.0.0.2", 00:35:56.068 "adrfam": "ipv4", 00:35:56.068 "trsvcid": "4420", 00:35:56.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:56.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:56.068 "hdgst": false, 00:35:56.068 "ddgst": false 00:35:56.068 }, 00:35:56.068 "method": "bdev_nvme_attach_controller" 00:35:56.068 },{ 00:35:56.068 "params": { 00:35:56.068 "name": "Nvme2", 00:35:56.068 "trtype": "tcp", 00:35:56.068 "traddr": "10.0.0.2", 00:35:56.068 "adrfam": "ipv4", 00:35:56.068 "trsvcid": "4420", 00:35:56.068 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:56.068 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:56.068 "hdgst": false, 00:35:56.068 "ddgst": false 00:35:56.068 }, 00:35:56.068 "method": "bdev_nvme_attach_controller" 00:35:56.068 }' 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:56.068 12:40:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:56.068 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:56.068 ... 00:35:56.068 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:56.068 ... 00:35:56.068 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:56.068 ... 00:35:56.068 fio-3.35 00:35:56.068 Starting 24 threads 00:36:08.403 00:36:08.403 filename0: (groupid=0, jobs=1): err= 0: pid=1946549: Mon Nov 4 12:40:41 2024 00:36:08.403 read: IOPS=636, BW=2547KiB/s (2608kB/s)(24.9MiB/10006msec) 00:36:08.403 slat (nsec): min=5818, max=51553, avg=7015.52, stdev=2653.08 00:36:08.403 clat (usec): min=3012, max=46218, avg=25081.91, stdev=7411.41 00:36:08.403 lat (usec): min=3032, max=46225, avg=25088.92, stdev=7411.18 00:36:08.403 clat percentiles (usec): 00:36:08.403 | 1.00th=[ 4080], 5.00th=[ 9372], 10.00th=[17433], 20.00th=[18744], 00:36:08.403 | 30.00th=[20579], 40.00th=[22152], 50.00th=[26870], 60.00th=[28705], 00:36:08.403 | 70.00th=[30802], 80.00th=[32375], 90.00th=[33162], 95.00th=[33817], 00:36:08.403 | 99.00th=[35390], 99.50th=[36439], 99.90th=[45876], 99.95th=[46400], 00:36:08.403 | 99.99th=[46400] 00:36:08.403 bw ( KiB/s): min= 1932, max= 3568, per=5.40%, avg=2568.21, stdev=401.57, samples=19 00:36:08.403 iops : min= 483, max= 892, avg=642.05, stdev=100.39, samples=19 00:36:08.403 lat (msec) : 4=0.89%, 10=4.60%, 20=21.58%, 50=72.93% 00:36:08.403 cpu : usr=99.10%, sys=0.62%, ctx=16, majf=0, minf=91 00:36:08.403 IO depths : 1=0.9%, 2=2.6%, 4=11.3%, 8=73.2%, 16=11.9%, 32=0.0%, >=64=0.0% 00:36:08.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.403 complete : 0=0.0%, 4=90.4%, 8=4.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.403 issued rwts: total=6372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.403 filename0: (groupid=0, jobs=1): err= 0: pid=1946550: Mon Nov 4 12:40:41 2024 00:36:08.403 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.3MiB/10015msec) 00:36:08.403 slat (usec): min=5, max=132, avg=20.04, stdev=18.33 00:36:08.403 clat (usec): min=15942, max=60174, avg=32319.07, stdev=5155.38 00:36:08.403 lat (usec): min=15962, max=60196, avg=32339.11, stdev=5157.80 00:36:08.403 clat percentiles (usec): 00:36:08.403 | 1.00th=[17695], 5.00th=[22152], 10.00th=[25297], 20.00th=[31589], 00:36:08.403 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:08.403 | 70.00th=[33424], 80.00th=[33817], 90.00th=[35390], 95.00th=[38536], 00:36:08.403 | 99.00th=[52691], 99.50th=[57410], 99.90th=[60031], 99.95th=[60031], 00:36:08.403 | 99.99th=[60031] 00:36:08.403 bw ( KiB/s): min= 1792, max= 2144, per=4.14%, avg=1971.20, stdev=83.15, samples=20 00:36:08.403 iops : min= 448, max= 536, avg=492.80, stdev=20.79, samples=20 00:36:08.403 lat (msec) : 20=2.11%, 50=96.60%, 100=1.30% 00:36:08.403 cpu : usr=98.95%, sys=0.74%, ctx=51, majf=0, minf=54 00:36:08.403 IO depths : 1=3.9%, 2=7.7%, 4=17.7%, 8=61.8%, 16=8.9%, 32=0.0%, >=64=0.0% 
00:36:08.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.403 complete : 0=0.0%, 4=92.1%, 8=2.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.403 issued rwts: total=4934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.403 filename0: (groupid=0, jobs=1): err= 0: pid=1946551: Mon Nov 4 12:40:41 2024 00:36:08.403 read: IOPS=482, BW=1931KiB/s (1977kB/s)(18.9MiB/10011msec) 00:36:08.403 slat (usec): min=5, max=114, avg=24.43, stdev=18.67 00:36:08.403 clat (usec): min=18749, max=40515, avg=32967.88, stdev=1853.89 00:36:08.403 lat (usec): min=18774, max=40573, avg=32992.32, stdev=1856.42 00:36:08.403 clat percentiles (usec): 00:36:08.403 | 1.00th=[24511], 5.00th=[31589], 10.00th=[32113], 20.00th=[32375], 00:36:08.403 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:36:08.403 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:36:08.403 | 99.00th=[38536], 99.50th=[39060], 99.90th=[40109], 99.95th=[40633], 00:36:08.403 | 99.99th=[40633] 00:36:08.403 bw ( KiB/s): min= 1792, max= 2048, per=4.05%, avg=1926.74, stdev=51.80, samples=19 00:36:08.403 iops : min= 448, max= 512, avg=481.68, stdev=12.95, samples=19 00:36:08.403 lat (msec) : 20=0.33%, 50=99.67% 00:36:08.403 cpu : usr=98.98%, sys=0.74%, ctx=17, majf=0, minf=49 00:36:08.403 IO depths : 1=3.7%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:36:08.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.403 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.403 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.403 filename0: (groupid=0, jobs=1): err= 0: pid=1946552: Mon Nov 4 12:40:41 2024 00:36:08.403 read: IOPS=491, BW=1964KiB/s (2012kB/s)(19.2MiB/10006msec) 00:36:08.403 slat (usec): min=5, max=128, avg=31.75, stdev=22.85 00:36:08.403 clat (usec): min=10409, max=52062, avg=32264.74, stdev=4102.10 00:36:08.403 lat (usec): min=10420, max=52074, avg=32296.48, stdev=4107.03 00:36:08.403 clat percentiles (usec): 00:36:08.403 | 1.00th=[19268], 5.00th=[22676], 10.00th=[30016], 20.00th=[31851], 00:36:08.403 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:36:08.403 | 70.00th=[33162], 80.00th=[33424], 90.00th=[34341], 95.00th=[34866], 00:36:08.403 | 99.00th=[50594], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:36:08.403 | 99.99th=[52167] 00:36:08.403 bw ( KiB/s): min= 1792, max= 2192, per=4.10%, avg=1951.79, stdev=88.69, samples=19 00:36:08.403 iops : min= 448, max= 548, avg=487.95, stdev=22.17, samples=19 00:36:08.403 lat (msec) : 20=2.48%, 50=96.50%, 100=1.02% 00:36:08.403 cpu : usr=99.06%, sys=0.64%, ctx=19, majf=0, minf=44 00:36:08.403 IO depths : 1=4.9%, 2=10.1%, 4=21.4%, 8=55.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:36:08.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.403 complete : 0=0.0%, 4=93.1%, 8=1.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.403 issued rwts: total=4914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.403 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.403 filename0: (groupid=0, jobs=1): err= 0: pid=1946553: Mon Nov 4 12:40:41 2024 00:36:08.403 read: IOPS=482, BW=1931KiB/s (1977kB/s)(18.9MiB/10011msec) 00:36:08.403 slat (usec): min=5, max=134, avg=22.51, stdev=23.36 00:36:08.403 clat (usec): min=18766, max=45579, avg=32960.52, 
stdev=1917.77 00:36:08.403 lat (usec): min=18791, max=45598, avg=32983.03, stdev=1919.61 00:36:08.403 clat percentiles (usec): 00:36:08.403 | 1.00th=[23725], 5.00th=[31589], 10.00th=[32113], 20.00th=[32375], 00:36:08.403 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:36:08.403 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:36:08.403 | 99.00th=[36439], 99.50th=[41157], 99.90th=[43254], 99.95th=[43779], 00:36:08.403 | 99.99th=[45351] 00:36:08.403 bw ( KiB/s): min= 1792, max= 2048, per=4.05%, avg=1926.74, stdev=64.07, samples=19 00:36:08.403 iops : min= 448, max= 512, avg=481.68, stdev=16.02, samples=19 00:36:08.403 lat (msec) : 20=0.33%, 50=99.67% 00:36:08.403 cpu : usr=99.17%, sys=0.50%, ctx=71, majf=0, minf=59 00:36:08.403 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:36:08.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.403 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.403 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.404 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.404 filename0: (groupid=0, jobs=1): err= 0: pid=1946554: Mon Nov 4 12:40:41 2024 00:36:08.404 read: IOPS=490, BW=1961KiB/s (2008kB/s)(19.2MiB/10022msec) 00:36:08.404 slat (usec): min=5, max=131, avg=24.74, stdev=21.20 00:36:08.404 clat (usec): min=2998, max=55309, avg=32433.68, stdev=4146.55 00:36:08.404 lat (usec): min=3016, max=55315, avg=32458.42, stdev=4147.31 00:36:08.404 clat percentiles (usec): 00:36:08.404 | 1.00th=[ 8094], 5.00th=[30802], 10.00th=[31851], 20.00th=[32113], 00:36:08.404 | 30.00th=[32375], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:36:08.404 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:36:08.404 | 99.00th=[36963], 99.50th=[40109], 99.90th=[55313], 99.95th=[55313], 00:36:08.404 | 99.99th=[55313] 00:36:08.404 bw ( KiB/s): min= 1792, max= 2536, per=4.12%, avg=1961.26, stdev=154.80, samples=19 00:36:08.404 iops : min= 448, max= 634, avg=490.32, stdev=38.70, samples=19 00:36:08.404 lat (msec) : 4=0.35%, 10=1.10%, 20=0.96%, 50=97.39%, 100=0.20% 00:36:08.404 cpu : usr=99.13%, sys=0.60%, ctx=15, majf=0, minf=46 00:36:08.404 IO depths : 1=5.6%, 2=11.6%, 4=24.2%, 8=51.6%, 16=7.0%, 32=0.0%, >=64=0.0% 00:36:08.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.404 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.404 issued rwts: total=4913,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.404 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.404 filename0: (groupid=0, jobs=1): err= 0: pid=1946555: Mon Nov 4 12:40:41 2024 00:36:08.404 read: IOPS=495, BW=1982KiB/s (2030kB/s)(19.4MiB/10001msec) 00:36:08.404 slat (nsec): min=5664, max=79417, avg=9798.41, stdev=6629.38 00:36:08.404 clat (usec): min=15080, max=54508, avg=32205.75, stdev=3775.24 00:36:08.404 lat (usec): min=15088, max=54547, avg=32215.55, stdev=3776.08 00:36:08.404 clat percentiles (usec): 00:36:08.404 | 1.00th=[19530], 5.00th=[22676], 10.00th=[27132], 20.00th=[32113], 00:36:08.404 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33424], 00:36:08.404 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:36:08.404 | 99.00th=[39584], 99.50th=[40633], 99.90th=[54264], 99.95th=[54264], 00:36:08.404 | 99.99th=[54264] 00:36:08.404 bw ( KiB/s): min= 1792, max= 2528, per=4.17%, avg=1985.68, stdev=192.14, samples=19 
00:36:08.404 iops : min= 448, max= 632, avg=496.42, stdev=48.04, samples=19 00:36:08.404 lat (msec) : 20=1.33%, 50=98.22%, 100=0.44% 00:36:08.404 cpu : usr=98.38%, sys=1.26%, ctx=25, majf=0, minf=121 00:36:08.404 IO depths : 1=5.4%, 2=10.8%, 4=22.6%, 8=54.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:36:08.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.404 complete : 0=0.0%, 4=93.4%, 8=0.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.404 issued rwts: total=4956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.404 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.404 filename0: (groupid=0, jobs=1): err= 0: pid=1946556: Mon Nov 4 12:40:41 2024 00:36:08.404 read: IOPS=484, BW=1938KiB/s (1985kB/s)(18.9MiB/10002msec) 00:36:08.404 slat (usec): min=5, max=117, avg=21.77, stdev=19.30 00:36:08.404 clat (usec): min=11238, max=71194, avg=32827.59, stdev=3615.22 00:36:08.404 lat (usec): min=11263, max=71210, avg=32849.36, stdev=3614.95 00:36:08.404 clat percentiles (usec): 00:36:08.404 | 1.00th=[19268], 5.00th=[26870], 10.00th=[31589], 20.00th=[32113], 00:36:08.404 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:08.404 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:36:08.404 | 99.00th=[45351], 99.50th=[47449], 99.90th=[61080], 99.95th=[61080], 00:36:08.404 | 99.99th=[70779] 00:36:08.404 bw ( KiB/s): min= 1667, max= 2192, per=4.06%, avg=1932.53, stdev=105.94, samples=19 00:36:08.404 iops : min= 416, max= 548, avg=483.05, stdev=26.55, samples=19 00:36:08.404 lat (msec) : 20=1.44%, 50=98.23%, 100=0.33% 00:36:08.404 cpu : usr=98.98%, sys=0.70%, ctx=13, majf=0, minf=72 00:36:08.404 IO depths : 1=3.9%, 2=9.4%, 4=22.8%, 8=55.2%, 16=8.7%, 32=0.0%, >=64=0.0% 00:36:08.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.404 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.404 issued rwts: total=4846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.404 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.404 filename1: (groupid=0, jobs=1): err= 0: pid=1946557: Mon Nov 4 12:40:41 2024 00:36:08.404 read: IOPS=480, BW=1924KiB/s (1970kB/s)(18.8MiB/10010msec) 00:36:08.404 slat (usec): min=5, max=130, avg=37.31, stdev=22.00 00:36:08.404 clat (usec): min=16140, max=42611, avg=32934.42, stdev=1416.24 00:36:08.404 lat (usec): min=16146, max=42632, avg=32971.74, stdev=1416.68 00:36:08.404 clat percentiles (usec): 00:36:08.404 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32375], 00:36:08.404 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:08.404 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:36:08.404 | 99.00th=[35914], 99.50th=[36963], 99.90th=[42730], 99.95th=[42730], 00:36:08.404 | 99.99th=[42730] 00:36:08.404 bw ( KiB/s): min= 1792, max= 2048, per=4.03%, avg=1919.79, stdev=60.35, samples=19 00:36:08.404 iops : min= 448, max= 512, avg=479.95, stdev=15.09, samples=19 00:36:08.404 lat (msec) : 20=0.29%, 50=99.71% 00:36:08.404 cpu : usr=98.96%, sys=0.68%, ctx=70, majf=0, minf=42 00:36:08.404 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:08.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.404 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.404 issued rwts: total=4814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.404 latency : target=0, window=0, percentile=100.00%, depth=16 
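The per-PID blocks above and below are fio's standard text report: slat, clat, and lat are submission, completion, and total latency; the bw and iops rows give min/mean/max across the sampled intervals; and the IO depths row shows how full the iodepth=16 queue actually stayed. When these numbers need to be checked by a script rather than by eye, fio can emit the same statistics as JSON. A minimal post-processing sketch, with illustrative names (fio.json and jobfile.fio are not taken from this job):

  # Any fio invocation can add machine-readable output alongside the normal log
  fio --output-format=json --output=fio.json jobfile.fio
  # Print each job's read IOPS and mean completion latency in microseconds
  jq -r '.jobs[] | [.jobname, .read.iops, (.read.clat_ns.mean/1000)] | @tsv' fio.json

The clat_ns field assumes fio 3.x (this run uses fio-3.35), where latencies are reported in nanoseconds.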
00:36:08.404 filename1: (groupid=0, jobs=1): err= 0: pid=1946558: Mon Nov 4 12:40:41 2024 00:36:08.404 read: IOPS=482, BW=1931KiB/s (1977kB/s)(18.9MiB/10011msec) 00:36:08.404 slat (usec): min=5, max=133, avg=35.21, stdev=24.17 00:36:08.404 clat (usec): min=18887, max=48264, avg=32848.87, stdev=1601.73 00:36:08.404 lat (usec): min=18907, max=48279, avg=32884.08, stdev=1602.70 00:36:08.404 clat percentiles (usec): 00:36:08.404 | 1.00th=[23987], 5.00th=[31589], 10.00th=[31851], 20.00th=[32375], 00:36:08.404 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:08.404 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:36:08.404 | 99.00th=[35914], 99.50th=[36439], 99.90th=[38536], 99.95th=[39584], 00:36:08.404 | 99.99th=[48497] 00:36:08.404 bw ( KiB/s): min= 1792, max= 2048, per=4.05%, avg=1926.74, stdev=67.11, samples=19 00:36:08.404 iops : min= 448, max= 512, avg=481.68, stdev=16.78, samples=19 00:36:08.404 lat (msec) : 20=0.33%, 50=99.67% 00:36:08.404 cpu : usr=99.25%, sys=0.45%, ctx=26, majf=0, minf=49 00:36:08.404 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:08.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.404 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.404 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.404 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.404 filename1: (groupid=0, jobs=1): err= 0: pid=1946559: Mon Nov 4 12:40:41 2024 00:36:08.404 read: IOPS=484, BW=1939KiB/s (1985kB/s)(18.9MiB/10002msec) 00:36:08.404 slat (nsec): min=5824, max=97149, avg=11157.24, stdev=7660.02 00:36:08.404 clat (usec): min=12301, max=44600, avg=32915.56, stdev=2203.98 00:36:08.404 lat (usec): min=12307, max=44608, avg=32926.72, stdev=2203.19 00:36:08.404 clat percentiles (usec): 00:36:08.404 | 1.00th=[21103], 5.00th=[31589], 10.00th=[32113], 20.00th=[32375], 00:36:08.404 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:36:08.404 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:36:08.404 | 99.00th=[35390], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:36:08.404 | 99.99th=[44827] 00:36:08.404 bw ( KiB/s): min= 1792, max= 2048, per=4.06%, avg=1933.47, stdev=58.73, samples=19 00:36:08.404 iops : min= 448, max= 512, avg=483.37, stdev=14.68, samples=19 00:36:08.404 lat (msec) : 20=0.66%, 50=99.34% 00:36:08.404 cpu : usr=98.79%, sys=0.78%, ctx=137, majf=0, minf=83 00:36:08.404 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:08.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.404 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.404 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.404 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.404 filename1: (groupid=0, jobs=1): err= 0: pid=1946560: Mon Nov 4 12:40:41 2024 00:36:08.404 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10005msec) 00:36:08.404 slat (usec): min=5, max=140, avg=31.95, stdev=24.62 00:36:08.404 clat (usec): min=8648, max=57621, avg=32324.26, stdev=4386.62 00:36:08.404 lat (usec): min=8655, max=57637, avg=32356.21, stdev=4391.50 00:36:08.404 clat percentiles (usec): 00:36:08.404 | 1.00th=[17171], 5.00th=[22414], 10.00th=[28967], 20.00th=[31851], 00:36:08.404 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:08.404 | 70.00th=[33424], 
80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:08.404 | 99.00th=[47449], 99.50th=[52691], 99.90th=[57410], 99.95th=[57410], 00:36:08.404 | 99.99th=[57410] 00:36:08.404 bw ( KiB/s): min= 1667, max= 2224, per=4.10%, avg=1951.95, stdev=117.23, samples=19 00:36:08.404 iops : min= 416, max= 556, avg=487.95, stdev=29.41, samples=19 00:36:08.404 lat (msec) : 10=0.33%, 20=2.32%, 50=96.82%, 100=0.53% 00:36:08.404 cpu : usr=97.99%, sys=1.20%, ctx=215, majf=0, minf=69 00:36:08.404 IO depths : 1=3.1%, 2=8.3%, 4=21.4%, 8=57.6%, 16=9.7%, 32=0.0%, >=64=0.0% 00:36:08.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.404 complete : 0=0.0%, 4=93.3%, 8=1.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.404 issued rwts: total=4908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.404 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.404 filename1: (groupid=0, jobs=1): err= 0: pid=1946561: Mon Nov 4 12:40:41 2024 00:36:08.404 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10016msec) 00:36:08.404 slat (usec): min=5, max=143, avg=39.20, stdev=24.62 00:36:08.404 clat (usec): min=20670, max=45059, avg=32918.17, stdev=1597.37 00:36:08.404 lat (usec): min=20680, max=45066, avg=32957.37, stdev=1599.36 00:36:08.404 clat percentiles (usec): 00:36:08.404 | 1.00th=[27395], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:36:08.404 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:36:08.404 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:36:08.404 | 99.00th=[38536], 99.50th=[41681], 99.90th=[43779], 99.95th=[44827], 00:36:08.404 | 99.99th=[44827] 00:36:08.404 bw ( KiB/s): min= 1792, max= 2048, per=4.03%, avg=1920.00, stdev=58.73, samples=20 00:36:08.405 iops : min= 448, max= 512, avg=480.00, stdev=14.68, samples=20 00:36:08.405 lat (msec) : 50=100.00% 00:36:08.405 cpu : usr=99.24%, sys=0.46%, ctx=15, majf=0, minf=50 00:36:08.405 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:36:08.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.405 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.405 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.405 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.405 filename1: (groupid=0, jobs=1): err= 0: pid=1946562: Mon Nov 4 12:40:41 2024 00:36:08.405 read: IOPS=499, BW=1999KiB/s (2047kB/s)(19.5MiB/10011msec) 00:36:08.405 slat (usec): min=5, max=131, avg=11.51, stdev=12.22 00:36:08.405 clat (usec): min=11127, max=46340, avg=31917.67, stdev=4156.89 00:36:08.405 lat (usec): min=11138, max=46347, avg=31929.18, stdev=4158.24 00:36:08.405 clat percentiles (usec): 00:36:08.405 | 1.00th=[17433], 5.00th=[22152], 10.00th=[25035], 20.00th=[32113], 00:36:08.405 | 30.00th=[32375], 40.00th=[32637], 50.00th=[33162], 60.00th=[33162], 00:36:08.405 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:36:08.405 | 99.00th=[40633], 99.50th=[41157], 99.90th=[43254], 99.95th=[44303], 00:36:08.405 | 99.99th=[46400] 00:36:08.405 bw ( KiB/s): min= 1792, max= 2736, per=4.20%, avg=1999.16, stdev=227.87, samples=19 00:36:08.405 iops : min= 448, max= 684, avg=499.79, stdev=56.97, samples=19 00:36:08.405 lat (msec) : 20=3.42%, 50=96.58% 00:36:08.405 cpu : usr=99.16%, sys=0.54%, ctx=19, majf=0, minf=53 00:36:08.405 IO depths : 1=4.7%, 2=10.2%, 4=22.6%, 8=54.7%, 16=7.8%, 32=0.0%, >=64=0.0% 00:36:08.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:36:08.405 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.405 issued rwts: total=5004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.405 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.405 filename1: (groupid=0, jobs=1): err= 0: pid=1946563: Mon Nov 4 12:40:41 2024 00:36:08.405 read: IOPS=574, BW=2296KiB/s (2351kB/s)(22.5MiB/10025msec) 00:36:08.405 slat (nsec): min=5847, max=76765, avg=11575.58, stdev=6463.77 00:36:08.405 clat (usec): min=3634, max=53499, avg=27785.69, stdev=6855.64 00:36:08.405 lat (usec): min=3651, max=53516, avg=27797.27, stdev=6856.55 00:36:08.405 clat percentiles (usec): 00:36:08.405 | 1.00th=[ 5735], 5.00th=[18220], 10.00th=[18482], 20.00th=[19792], 00:36:08.405 | 30.00th=[21365], 40.00th=[28967], 50.00th=[31851], 60.00th=[32375], 00:36:08.405 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:36:08.405 | 99.00th=[35390], 99.50th=[36963], 99.90th=[52167], 99.95th=[52691], 00:36:08.405 | 99.99th=[53740] 00:36:08.405 bw ( KiB/s): min= 1904, max= 3432, per=4.82%, avg=2294.95, stdev=485.79, samples=20 00:36:08.405 iops : min= 476, max= 858, avg=573.70, stdev=121.47, samples=20 00:36:08.405 lat (msec) : 4=0.16%, 10=1.65%, 20=19.13%, 50=78.89%, 100=0.17% 00:36:08.405 cpu : usr=98.84%, sys=0.74%, ctx=115, majf=0, minf=56 00:36:08.405 IO depths : 1=3.3%, 2=6.8%, 4=16.6%, 8=64.1%, 16=9.3%, 32=0.0%, >=64=0.0% 00:36:08.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.405 complete : 0=0.0%, 4=91.7%, 8=2.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.405 issued rwts: total=5755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.405 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.405 filename1: (groupid=0, jobs=1): err= 0: pid=1946564: Mon Nov 4 12:40:41 2024 00:36:08.405 read: IOPS=481, BW=1926KiB/s (1972kB/s)(18.8MiB/10004msec) 00:36:08.405 slat (nsec): min=5739, max=75346, avg=11195.82, stdev=6435.56 00:36:08.405 clat (usec): min=8689, max=84045, avg=33136.62, stdev=3111.80 00:36:08.405 lat (usec): min=8700, max=84061, avg=33147.82, stdev=3111.77 00:36:08.405 clat percentiles (usec): 00:36:08.405 | 1.00th=[18482], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:08.405 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:36:08.405 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:36:08.405 | 99.00th=[36439], 99.50th=[50070], 99.90th=[62653], 99.95th=[62653], 00:36:08.405 | 99.99th=[84411] 00:36:08.405 bw ( KiB/s): min= 1667, max= 2048, per=4.02%, avg=1913.21, stdev=78.99, samples=19 00:36:08.405 iops : min= 416, max= 512, avg=478.26, stdev=19.88, samples=19 00:36:08.405 lat (msec) : 10=0.33%, 20=0.87%, 50=98.30%, 100=0.50% 00:36:08.405 cpu : usr=98.99%, sys=0.64%, ctx=60, majf=0, minf=75 00:36:08.405 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:08.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.405 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.405 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.405 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.405 filename2: (groupid=0, jobs=1): err= 0: pid=1946565: Mon Nov 4 12:40:41 2024 00:36:08.405 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10005msec) 00:36:08.405 slat (usec): min=5, max=135, avg=20.04, stdev=20.93 00:36:08.405 clat (usec): min=8828, max=62834, avg=32566.53, stdev=4574.76 
00:36:08.405 lat (usec): min=8846, max=62849, avg=32586.57, stdev=4577.34 00:36:08.405 clat percentiles (usec): 00:36:08.405 | 1.00th=[18744], 5.00th=[22414], 10.00th=[30802], 20.00th=[32113], 00:36:08.405 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:08.405 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:36:08.405 | 99.00th=[51119], 99.50th=[60556], 99.90th=[62653], 99.95th=[62653], 00:36:08.405 | 99.99th=[62653] 00:36:08.405 bw ( KiB/s): min= 1715, max= 2112, per=4.08%, avg=1942.68, stdev=80.89, samples=19 00:36:08.405 iops : min= 428, max= 528, avg=485.63, stdev=20.34, samples=19 00:36:08.405 lat (msec) : 10=0.06%, 20=2.72%, 50=95.79%, 100=1.43% 00:36:08.405 cpu : usr=98.26%, sys=1.01%, ctx=279, majf=0, minf=65 00:36:08.405 IO depths : 1=4.3%, 2=9.5%, 4=21.5%, 8=56.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:36:08.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.405 complete : 0=0.0%, 4=93.2%, 8=1.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.405 issued rwts: total=4888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.405 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.405 filename2: (groupid=0, jobs=1): err= 0: pid=1946566: Mon Nov 4 12:40:41 2024 00:36:08.405 read: IOPS=481, BW=1926KiB/s (1972kB/s)(18.8MiB/10004msec) 00:36:08.405 slat (usec): min=5, max=117, avg=35.42, stdev=18.00 00:36:08.405 clat (usec): min=8279, max=57018, avg=32901.21, stdev=2277.49 00:36:08.405 lat (usec): min=8285, max=57035, avg=32936.63, stdev=2278.68 00:36:08.405 clat percentiles (usec): 00:36:08.405 | 1.00th=[25822], 5.00th=[31851], 10.00th=[31851], 20.00th=[32375], 00:36:08.405 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:08.405 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:36:08.405 | 99.00th=[35914], 99.50th=[36963], 99.90th=[56886], 99.95th=[56886], 00:36:08.405 | 99.99th=[56886] 00:36:08.405 bw ( KiB/s): min= 1667, max= 2048, per=4.02%, avg=1913.21, stdev=89.77, samples=19 00:36:08.405 iops : min= 416, max= 512, avg=478.26, stdev=22.56, samples=19 00:36:08.405 lat (msec) : 10=0.04%, 20=0.58%, 50=99.04%, 100=0.33% 00:36:08.405 cpu : usr=98.83%, sys=0.80%, ctx=62, majf=0, minf=50 00:36:08.405 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:08.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.405 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.405 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.405 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.405 filename2: (groupid=0, jobs=1): err= 0: pid=1946567: Mon Nov 4 12:40:41 2024 00:36:08.405 read: IOPS=481, BW=1925KiB/s (1972kB/s)(18.8MiB/10005msec) 00:36:08.405 slat (usec): min=5, max=113, avg=31.33, stdev=15.29 00:36:08.405 clat (usec): min=8311, max=58031, avg=32968.74, stdev=2317.82 00:36:08.405 lat (usec): min=8318, max=58046, avg=33000.07, stdev=2318.62 00:36:08.405 clat percentiles (usec): 00:36:08.405 | 1.00th=[26608], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:08.405 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:08.405 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:36:08.405 | 99.00th=[35914], 99.50th=[36963], 99.90th=[57934], 99.95th=[57934], 00:36:08.405 | 99.99th=[57934] 00:36:08.405 bw ( KiB/s): min= 1664, max= 2048, per=4.02%, avg=1913.05, stdev=90.23, samples=19 00:36:08.405 iops 
: min= 416, max= 512, avg=478.26, stdev=22.56, samples=19 00:36:08.405 lat (msec) : 10=0.04%, 20=0.58%, 50=99.04%, 100=0.33% 00:36:08.405 cpu : usr=98.72%, sys=0.87%, ctx=123, majf=0, minf=40 00:36:08.405 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:08.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.405 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.405 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.405 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.406 filename2: (groupid=0, jobs=1): err= 0: pid=1946568: Mon Nov 4 12:40:41 2024 00:36:08.406 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10005msec) 00:36:08.406 slat (usec): min=5, max=121, avg=35.49, stdev=18.69 00:36:08.406 clat (usec): min=12092, max=57870, avg=32909.84, stdev=2297.27 00:36:08.406 lat (usec): min=12098, max=57887, avg=32945.33, stdev=2298.58 00:36:08.406 clat percentiles (usec): 00:36:08.406 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32375], 00:36:08.406 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:08.406 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:36:08.406 | 99.00th=[35914], 99.50th=[36963], 99.90th=[57934], 99.95th=[57934], 00:36:08.406 | 99.99th=[57934] 00:36:08.406 bw ( KiB/s): min= 1664, max= 2048, per=4.02%, avg=1913.05, stdev=90.23, samples=19 00:36:08.406 iops : min= 416, max= 512, avg=478.26, stdev=22.56, samples=19 00:36:08.406 lat (msec) : 20=0.66%, 50=98.96%, 100=0.37% 00:36:08.406 cpu : usr=98.64%, sys=0.94%, ctx=116, majf=0, minf=40 00:36:08.406 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:08.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.406 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.406 issued rwts: total=4814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.406 filename2: (groupid=0, jobs=1): err= 0: pid=1946569: Mon Nov 4 12:40:41 2024 00:36:08.406 read: IOPS=485, BW=1943KiB/s (1989kB/s)(19.0MiB/10011msec) 00:36:08.406 slat (usec): min=5, max=116, avg=28.86, stdev=20.28 00:36:08.406 clat (usec): min=18222, max=55845, avg=32690.05, stdev=4212.06 00:36:08.406 lat (usec): min=18228, max=55852, avg=32718.91, stdev=4214.64 00:36:08.406 clat percentiles (usec): 00:36:08.406 | 1.00th=[19792], 5.00th=[23987], 10.00th=[28443], 20.00th=[32113], 00:36:08.406 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:08.406 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:36:08.406 | 99.00th=[52167], 99.50th=[52691], 99.90th=[55837], 99.95th=[55837], 00:36:08.406 | 99.99th=[55837] 00:36:08.406 bw ( KiB/s): min= 1792, max= 2208, per=4.07%, avg=1939.37, stdev=82.38, samples=19 00:36:08.406 iops : min= 448, max= 552, avg=484.84, stdev=20.59, samples=19 00:36:08.406 lat (msec) : 20=1.03%, 50=97.90%, 100=1.07% 00:36:08.406 cpu : usr=98.89%, sys=0.75%, ctx=79, majf=0, minf=57 00:36:08.406 IO depths : 1=4.8%, 2=9.6%, 4=20.3%, 8=57.4%, 16=8.0%, 32=0.0%, >=64=0.0% 00:36:08.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.406 complete : 0=0.0%, 4=92.8%, 8=1.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.406 issued rwts: total=4862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.406 latency : target=0, window=0, percentile=100.00%, depth=16 
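Each job's per= value is its share of the group's aggregate read bandwidth, so the shares should add up to roughly 100% across the group and match the Run status group summary that follows. To cross-check the aggregate from a saved copy of this log, something along these lines works (autotest.log is a stand-in name):

  # Sum the per-job read IOPS figures scattered through the log
  grep -o 'read: IOPS=[0-9]*' autotest.log \
    | awk -F= '{ total += $2 } END { print total " read IOPS summed over all jobs" }'

Note this sums every read line in the file, so run it against a slice containing only one fio invocation.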
00:36:08.406 filename2: (groupid=0, jobs=1): err= 0: pid=1946570: Mon Nov 4 12:40:41 2024 00:36:08.406 read: IOPS=498, BW=1994KiB/s (2041kB/s)(19.5MiB/10004msec) 00:36:08.406 slat (nsec): min=5672, max=94815, avg=12845.53, stdev=10581.06 00:36:08.406 clat (usec): min=8742, max=75251, avg=32030.66, stdev=5296.11 00:36:08.406 lat (usec): min=8758, max=75268, avg=32043.51, stdev=5296.95 00:36:08.406 clat percentiles (usec): 00:36:08.406 | 1.00th=[17433], 5.00th=[20317], 10.00th=[25822], 20.00th=[31065], 00:36:08.406 | 30.00th=[32113], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:08.406 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34866], 95.00th=[38011], 00:36:08.406 | 99.00th=[50594], 99.50th=[51119], 99.90th=[57410], 99.95th=[57410], 00:36:08.406 | 99.99th=[74974] 00:36:08.406 bw ( KiB/s): min= 1920, max= 2144, per=4.17%, avg=1986.47, stdev=63.75, samples=19 00:36:08.406 iops : min= 480, max= 536, avg=496.58, stdev=15.95, samples=19 00:36:08.406 lat (msec) : 10=0.08%, 20=4.27%, 50=94.44%, 100=1.20% 00:36:08.406 cpu : usr=98.61%, sys=0.96%, ctx=60, majf=0, minf=59 00:36:08.406 IO depths : 1=0.2%, 2=1.1%, 4=5.4%, 8=77.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:36:08.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.406 complete : 0=0.0%, 4=90.0%, 8=7.9%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.406 issued rwts: total=4986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.406 filename2: (groupid=0, jobs=1): err= 0: pid=1946571: Mon Nov 4 12:40:41 2024 00:36:08.406 read: IOPS=482, BW=1931KiB/s (1977kB/s)(18.9MiB/10011msec) 00:36:08.406 slat (usec): min=5, max=138, avg=39.43, stdev=24.21 00:36:08.406 clat (usec): min=18857, max=36448, avg=32761.68, stdev=1502.81 00:36:08.406 lat (usec): min=18892, max=36457, avg=32801.11, stdev=1505.94 00:36:08.406 clat percentiles (usec): 00:36:08.406 | 1.00th=[24773], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:36:08.406 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:36:08.406 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:08.406 | 99.00th=[34866], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:36:08.406 | 99.99th=[36439] 00:36:08.406 bw ( KiB/s): min= 1792, max= 2048, per=4.05%, avg=1926.40, stdev=50.44, samples=20 00:36:08.406 iops : min= 448, max= 512, avg=481.60, stdev=12.61, samples=20 00:36:08.406 lat (msec) : 20=0.33%, 50=99.67% 00:36:08.406 cpu : usr=99.16%, sys=0.54%, ctx=38, majf=0, minf=49 00:36:08.406 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:08.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.406 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.406 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.406 filename2: (groupid=0, jobs=1): err= 0: pid=1946572: Mon Nov 4 12:40:41 2024 00:36:08.406 read: IOPS=486, BW=1946KiB/s (1993kB/s)(19.0MiB/10013msec) 00:36:08.406 slat (usec): min=5, max=106, avg=20.19, stdev=17.13 00:36:08.406 clat (usec): min=12546, max=62524, avg=32717.01, stdev=3559.68 00:36:08.406 lat (usec): min=12552, max=62544, avg=32737.20, stdev=3559.76 00:36:08.406 clat percentiles (usec): 00:36:08.406 | 1.00th=[20579], 5.00th=[27132], 10.00th=[31851], 20.00th=[32113], 00:36:08.406 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:36:08.406 | 
70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:36:08.406 | 99.00th=[40109], 99.50th=[49546], 99.90th=[62653], 99.95th=[62653], 00:36:08.406 | 99.99th=[62653] 00:36:08.406 bw ( KiB/s): min= 1660, max= 2448, per=4.08%, avg=1941.95, stdev=142.71, samples=20 00:36:08.406 iops : min= 415, max= 612, avg=485.45, stdev=35.65, samples=20 00:36:08.406 lat (msec) : 20=0.86%, 50=98.65%, 100=0.49% 00:36:08.406 cpu : usr=98.81%, sys=0.72%, ctx=81, majf=0, minf=83 00:36:08.406 IO depths : 1=5.0%, 2=10.8%, 4=23.5%, 8=53.1%, 16=7.6%, 32=0.0%, >=64=0.0% 00:36:08.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.406 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.406 issued rwts: total=4872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.406 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:08.406 00:36:08.406 Run status group 0 (all jobs): 00:36:08.406 READ: bw=46.5MiB/s (48.7MB/s), 1923KiB/s-2547KiB/s (1969kB/s-2608kB/s), io=466MiB (489MB), run=10001-10025msec 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@45 -- # for sub in "$@" 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:08.406 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.407 bdev_null0 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.407 [2024-11-04 12:40:41.318327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.407 bdev_null1 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:08.407 { 00:36:08.407 "params": { 00:36:08.407 "name": "Nvme$subsystem", 00:36:08.407 "trtype": "$TEST_TRANSPORT", 00:36:08.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:08.407 "adrfam": "ipv4", 00:36:08.407 "trsvcid": "$NVMF_PORT", 00:36:08.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:08.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:08.407 "hdgst": ${hdgst:-false}, 00:36:08.407 "ddgst": ${ddgst:-false} 00:36:08.407 }, 00:36:08.407 "method": "bdev_nvme_attach_controller" 00:36:08.407 } 00:36:08.407 EOF 00:36:08.407 )") 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:08.407 { 00:36:08.407 "params": { 00:36:08.407 "name": "Nvme$subsystem", 00:36:08.407 "trtype": "$TEST_TRANSPORT", 00:36:08.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:08.407 "adrfam": "ipv4", 00:36:08.407 "trsvcid": "$NVMF_PORT", 00:36:08.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:08.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:08.407 "hdgst": ${hdgst:-false}, 00:36:08.407 "ddgst": ${ddgst:-false} 00:36:08.407 }, 00:36:08.407 "method": "bdev_nvme_attach_controller" 00:36:08.407 } 00:36:08.407 EOF 00:36:08.407 )") 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:08.407 "params": { 00:36:08.407 "name": "Nvme0", 00:36:08.407 "trtype": "tcp", 00:36:08.407 "traddr": "10.0.0.2", 00:36:08.407 "adrfam": "ipv4", 00:36:08.407 "trsvcid": "4420", 00:36:08.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:08.407 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:08.407 "hdgst": false, 00:36:08.407 "ddgst": false 00:36:08.407 }, 00:36:08.407 "method": "bdev_nvme_attach_controller" 00:36:08.407 },{ 00:36:08.407 "params": { 00:36:08.407 "name": "Nvme1", 00:36:08.407 "trtype": "tcp", 00:36:08.407 "traddr": "10.0.0.2", 00:36:08.407 "adrfam": "ipv4", 00:36:08.407 "trsvcid": "4420", 00:36:08.407 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:08.407 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:08.407 "hdgst": false, 00:36:08.407 "ddgst": false 00:36:08.407 }, 00:36:08.407 "method": "bdev_nvme_attach_controller" 00:36:08.407 }' 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:08.407 12:40:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:08.407 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:08.407 ... 00:36:08.407 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:08.407 ... 
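The trace from the destroy_subsystems 0 1 2 call down to this point tears down the three targets used by the previous run, rebuilds two DIF-type-1 null bdevs (bdev_null0, bdev_null1), exports each behind an NVMe/TCP subsystem listening on 10.0.0.2:4420, and finally launches fio through the SPDK bdev plugin with a JSON config fed in over /dev/fd/62. Stripped of the rpc_cmd and xtrace wrappers, the traced sequence corresponds roughly to the following sketch (the scripts/rpc.py invocation path is an assumption; the arguments are the ones visible in the trace):

  # One null bdev with 16-byte metadata and DIF type 1, exported over NVMe/TCP
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
  # The same steps repeat with sub_id=1 for bdev_null1/cnode1 before fio attaches to both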
00:36:08.407 fio-3.35 00:36:08.407 Starting 4 threads 00:36:13.700 00:36:13.700 filename0: (groupid=0, jobs=1): err= 0: pid=1948779: Mon Nov 4 12:40:47 2024 00:36:13.700 read: IOPS=2085, BW=16.3MiB/s (17.1MB/s)(81.5MiB/5002msec) 00:36:13.700 slat (nsec): min=5654, max=32668, avg=6309.19, stdev=1766.44 00:36:13.700 clat (usec): min=2410, max=45237, avg=3819.38, stdev=1273.45 00:36:13.700 lat (usec): min=2416, max=45270, avg=3825.69, stdev=1273.62 00:36:13.700 clat percentiles (usec): 00:36:13.700 | 1.00th=[ 2802], 5.00th=[ 3130], 10.00th=[ 3294], 20.00th=[ 3458], 00:36:13.700 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3785], 00:36:13.700 | 70.00th=[ 3818], 80.00th=[ 3982], 90.00th=[ 4359], 95.00th=[ 5211], 00:36:13.700 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 6325], 99.95th=[45351], 00:36:13.700 | 99.99th=[45351] 00:36:13.700 bw ( KiB/s): min=15472, max=17040, per=24.79%, avg=16680.00, stdev=441.62, samples=10 00:36:13.700 iops : min= 1934, max= 2130, avg=2085.00, stdev=55.20, samples=10 00:36:13.700 lat (msec) : 4=80.65%, 10=19.27%, 50=0.08% 00:36:13.700 cpu : usr=96.98%, sys=2.80%, ctx=8, majf=0, minf=52 00:36:13.700 IO depths : 1=0.1%, 2=0.1%, 4=72.5%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.700 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.700 issued rwts: total=10430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.700 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:13.700 filename0: (groupid=0, jobs=1): err= 0: pid=1948781: Mon Nov 4 12:40:47 2024 00:36:13.700 read: IOPS=2052, BW=16.0MiB/s (16.8MB/s)(80.2MiB/5002msec) 00:36:13.700 slat (nsec): min=5651, max=28176, avg=6225.09, stdev=1510.24 00:36:13.700 clat (usec): min=1231, max=6621, avg=3880.66, stdev=587.25 00:36:13.700 lat (usec): min=1237, max=6627, avg=3886.88, stdev=587.09 00:36:13.700 clat percentiles (usec): 00:36:13.700 | 1.00th=[ 3064], 5.00th=[ 3326], 10.00th=[ 3458], 20.00th=[ 3523], 00:36:13.700 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3785], 00:36:13.700 | 70.00th=[ 3851], 80.00th=[ 4047], 90.00th=[ 4424], 95.00th=[ 5473], 00:36:13.700 | 99.00th=[ 5932], 99.50th=[ 6063], 99.90th=[ 6390], 99.95th=[ 6456], 00:36:13.700 | 99.99th=[ 6587] 00:36:13.700 bw ( KiB/s): min=16192, max=16673, per=24.40%, avg=16412.90, stdev=154.36, samples=10 00:36:13.700 iops : min= 2024, max= 2084, avg=2051.80, stdev=19.17, samples=10 00:36:13.700 lat (msec) : 2=0.03%, 4=76.43%, 10=23.54% 00:36:13.700 cpu : usr=96.94%, sys=2.68%, ctx=155, majf=0, minf=31 00:36:13.700 IO depths : 1=0.1%, 2=0.1%, 4=73.7%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.700 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.700 issued rwts: total=10265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.700 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:13.700 filename1: (groupid=0, jobs=1): err= 0: pid=1948782: Mon Nov 4 12:40:47 2024 00:36:13.700 read: IOPS=2021, BW=15.8MiB/s (16.6MB/s)(79.0MiB/5002msec) 00:36:13.700 slat (nsec): min=5655, max=70169, avg=6371.36, stdev=2115.92 00:36:13.700 clat (usec): min=1786, max=6547, avg=3939.91, stdev=688.42 00:36:13.700 lat (usec): min=1792, max=6553, avg=3946.28, stdev=688.22 00:36:13.700 clat percentiles (usec): 00:36:13.700 | 1.00th=[ 2999], 5.00th=[ 3261], 10.00th=[ 3392], 20.00th=[ 3490], 00:36:13.700 | 30.00th=[ 
3589], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3785], 00:36:13.700 | 70.00th=[ 3916], 80.00th=[ 4080], 90.00th=[ 5276], 95.00th=[ 5604], 00:36:13.700 | 99.00th=[ 5997], 99.50th=[ 6063], 99.90th=[ 6325], 99.95th=[ 6390], 00:36:13.700 | 99.99th=[ 6521] 00:36:13.700 bw ( KiB/s): min=15920, max=16608, per=24.04%, avg=16174.40, stdev=187.87, samples=10 00:36:13.700 iops : min= 1990, max= 2076, avg=2021.80, stdev=23.48, samples=10 00:36:13.700 lat (msec) : 2=0.03%, 4=73.66%, 10=26.32% 00:36:13.700 cpu : usr=97.20%, sys=2.58%, ctx=5, majf=0, minf=50 00:36:13.700 IO depths : 1=0.1%, 2=0.1%, 4=73.0%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.701 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.701 issued rwts: total=10112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.701 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:13.701 filename1: (groupid=0, jobs=1): err= 0: pid=1948783: Mon Nov 4 12:40:47 2024 00:36:13.701 read: IOPS=2250, BW=17.6MiB/s (18.4MB/s)(87.9MiB/5002msec) 00:36:13.701 slat (nsec): min=5678, max=72236, avg=6467.40, stdev=2080.37 00:36:13.701 clat (usec): min=1649, max=5498, avg=3538.80, stdev=532.96 00:36:13.701 lat (usec): min=1656, max=5504, avg=3545.27, stdev=532.84 00:36:13.701 clat percentiles (usec): 00:36:13.701 | 1.00th=[ 2573], 5.00th=[ 2802], 10.00th=[ 2900], 20.00th=[ 3130], 00:36:13.701 | 30.00th=[ 3261], 40.00th=[ 3392], 50.00th=[ 3556], 60.00th=[ 3556], 00:36:13.701 | 70.00th=[ 3752], 80.00th=[ 3785], 90.00th=[ 4146], 95.00th=[ 4817], 00:36:13.701 | 99.00th=[ 5080], 99.50th=[ 5145], 99.90th=[ 5276], 99.95th=[ 5342], 00:36:13.701 | 99.99th=[ 5473] 00:36:13.701 bw ( KiB/s): min=17648, max=18272, per=26.76%, avg=18001.60, stdev=167.89, samples=10 00:36:13.701 iops : min= 2206, max= 2284, avg=2250.20, stdev=20.99, samples=10 00:36:13.701 lat (msec) : 2=0.09%, 4=86.55%, 10=13.36% 00:36:13.701 cpu : usr=97.78%, sys=1.96%, ctx=7, majf=0, minf=53 00:36:13.701 IO depths : 1=0.1%, 2=0.2%, 4=67.8%, 8=31.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:13.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.701 complete : 0=0.0%, 4=96.2%, 8=3.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:13.701 issued rwts: total=11256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:13.701 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:13.701 00:36:13.701 Run status group 0 (all jobs): 00:36:13.701 READ: bw=65.7MiB/s (68.9MB/s), 15.8MiB/s-17.6MiB/s (16.6MB/s-18.4MB/s), io=329MiB (345MB), run=5002-5002msec 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.701 12:40:47 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.701 00:36:13.701 real 0m24.409s 00:36:13.701 user 5m16.756s 00:36:13.701 sys 0m4.771s 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:13.701 12:40:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:13.701 ************************************ 00:36:13.701 END TEST fio_dif_rand_params 00:36:13.701 ************************************ 00:36:13.701 12:40:47 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:13.701 12:40:47 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:13.701 12:40:47 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:13.701 12:40:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:13.701 ************************************ 00:36:13.701 START TEST fio_dif_digest 00:36:13.701 ************************************ 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:13.701 bdev_null0 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:13.701 [2024-11-04 12:40:47.902687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:13.701 { 00:36:13.701 "params": { 00:36:13.701 "name": "Nvme$subsystem", 00:36:13.701 "trtype": "$TEST_TRANSPORT", 00:36:13.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.701 "adrfam": "ipv4", 00:36:13.701 "trsvcid": "$NVMF_PORT", 00:36:13.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.701 "hdgst": ${hdgst:-false}, 00:36:13.701 "ddgst": ${ddgst:-false} 00:36:13.701 }, 00:36:13.701 "method": "bdev_nvme_attach_controller" 00:36:13.701 } 00:36:13.701 EOF 00:36:13.701 
)") 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:36:13.701 12:40:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:13.701 "params": { 00:36:13.701 "name": "Nvme0", 00:36:13.702 "trtype": "tcp", 00:36:13.702 "traddr": "10.0.0.2", 00:36:13.702 "adrfam": "ipv4", 00:36:13.702 "trsvcid": "4420", 00:36:13.702 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:13.702 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:13.702 "hdgst": true, 00:36:13.702 "ddgst": true 00:36:13.702 }, 00:36:13.702 "method": "bdev_nvme_attach_controller" 00:36:13.702 }' 00:36:13.702 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:13.702 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:13.702 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:13.702 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:13.702 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:13.702 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:13.702 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:13.702 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:13.702 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:13.702 12:40:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.964 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:13.964 ... 
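(Editor's note: the fio_plugin wrapper traced above reduces to LD_PRELOAD-ing the SPDK bdev engine and handing fio a JSON config plus a job file over /dev/fd. A hedged standalone equivalent follows; the /tmp paths, numjobs=3, and the Nvme0n1 bdev name are assumptions, while rw/bs/iodepth are copied from the filename0 line above.)

# Assumes the JSON from the previous sketch was written to /tmp/bdev.json and
# that fio plus the SPDK fio plugin were built at the paths this run used.
cat > /tmp/dif_digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/bdev.json
thread=1
[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
EOF
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio /tmp/dif_digest.fio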
00:36:13.964 fio-3.35 00:36:13.964 Starting 3 threads 00:36:26.220 00:36:26.220 filename0: (groupid=0, jobs=1): err= 0: pid=1950275: Mon Nov 4 12:40:58 2024 00:36:26.220 read: IOPS=240, BW=30.1MiB/s (31.6MB/s)(303MiB/10047msec) 00:36:26.220 slat (nsec): min=5882, max=32079, avg=6792.29, stdev=1091.93 00:36:26.220 clat (usec): min=7981, max=54087, avg=12428.16, stdev=1496.52 00:36:26.220 lat (usec): min=7987, max=54093, avg=12434.96, stdev=1496.49 00:36:26.220 clat percentiles (usec): 00:36:26.220 | 1.00th=[ 9110], 5.00th=[10814], 10.00th=[11207], 20.00th=[11731], 00:36:26.220 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12649], 00:36:26.220 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[13829], 00:36:26.220 | 99.00th=[14615], 99.50th=[15008], 99.90th=[16057], 99.95th=[49546], 00:36:26.220 | 99.99th=[54264] 00:36:26.220 bw ( KiB/s): min=29952, max=32000, per=36.43%, avg=30950.40, stdev=511.33, samples=20 00:36:26.220 iops : min= 234, max= 250, avg=241.80, stdev= 3.99, samples=20 00:36:26.220 lat (msec) : 10=1.82%, 20=98.10%, 50=0.04%, 100=0.04% 00:36:26.220 cpu : usr=94.69%, sys=5.07%, ctx=18, majf=0, minf=82 00:36:26.220 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:26.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.220 issued rwts: total=2420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:26.220 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:26.220 filename0: (groupid=0, jobs=1): err= 0: pid=1950276: Mon Nov 4 12:40:58 2024 00:36:26.220 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(266MiB/10006msec) 00:36:26.220 slat (nsec): min=5973, max=33505, avg=7208.77, stdev=1493.03 00:36:26.220 clat (usec): min=6938, max=54693, avg=14122.63, stdev=2348.13 00:36:26.220 lat (usec): min=6945, max=54699, avg=14129.84, stdev=2348.12 00:36:26.220 clat percentiles (usec): 00:36:26.220 | 1.00th=[11469], 5.00th=[12518], 10.00th=[12780], 20.00th=[13173], 00:36:26.220 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14222], 00:36:26.220 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15401], 95.00th=[15795], 00:36:26.220 | 99.00th=[16712], 99.50th=[17171], 99.90th=[53216], 99.95th=[53740], 00:36:26.220 | 99.99th=[54789] 00:36:26.220 bw ( KiB/s): min=25088, max=28672, per=31.97%, avg=27162.95, stdev=865.60, samples=19 00:36:26.220 iops : min= 196, max= 224, avg=212.21, stdev= 6.76, samples=19 00:36:26.220 lat (msec) : 10=0.38%, 20=99.34%, 100=0.28% 00:36:26.220 cpu : usr=94.02%, sys=4.99%, ctx=665, majf=0, minf=188 00:36:26.220 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:26.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.220 issued rwts: total=2124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:26.220 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:26.220 filename0: (groupid=0, jobs=1): err= 0: pid=1950277: Mon Nov 4 12:40:58 2024 00:36:26.220 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(266MiB/10045msec) 00:36:26.220 slat (nsec): min=5876, max=32889, avg=6867.41, stdev=1255.85 00:36:26.220 clat (usec): min=9139, max=57274, avg=14159.38, stdev=2223.20 00:36:26.220 lat (usec): min=9146, max=57281, avg=14166.25, stdev=2223.25 00:36:26.220 clat percentiles (usec): 00:36:26.220 | 1.00th=[10945], 5.00th=[12387], 10.00th=[12780], 20.00th=[13304], 
00:36:26.221 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:36:26.221 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15401], 95.00th=[15795], 00:36:26.221 | 99.00th=[16909], 99.50th=[17433], 99.90th=[55837], 99.95th=[56886], 00:36:26.221 | 99.99th=[57410] 00:36:26.221 bw ( KiB/s): min=24320, max=28672, per=31.97%, avg=27161.60, stdev=950.27, samples=20 00:36:26.221 iops : min= 190, max= 224, avg=212.20, stdev= 7.42, samples=20 00:36:26.221 lat (msec) : 10=0.42%, 20=99.20%, 50=0.24%, 100=0.14% 00:36:26.221 cpu : usr=95.52%, sys=4.26%, ctx=17, majf=0, minf=141 00:36:26.221 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:26.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:26.221 issued rwts: total=2124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:26.221 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:26.221 00:36:26.221 Run status group 0 (all jobs): 00:36:26.221 READ: bw=83.0MiB/s (87.0MB/s), 26.4MiB/s-30.1MiB/s (27.7MB/s-31.6MB/s), io=834MiB (874MB), run=10006-10047msec 00:36:26.221 12:40:58 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:26.221 12:40:58 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:26.221 12:40:58 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:26.221 12:40:58 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:26.221 12:40:58 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:26.221 12:40:58 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:26.221 12:40:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.221 12:40:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:26.221 12:40:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.221 12:40:58 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:26.221 12:40:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.221 12:40:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:26.221 12:40:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.221 00:36:26.221 real 0m11.138s 00:36:26.221 user 0m40.536s 00:36:26.221 sys 0m1.756s 00:36:26.221 12:40:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:26.221 12:40:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:26.221 ************************************ 00:36:26.221 END TEST fio_dif_digest 00:36:26.221 ************************************ 00:36:26.221 12:40:59 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:26.221 12:40:59 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:26.221 12:40:59 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:26.221 12:40:59 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:26.221 12:40:59 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:26.221 12:40:59 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:26.221 12:40:59 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:26.221 12:40:59 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:26.221 rmmod nvme_tcp 00:36:26.221 rmmod nvme_fabrics 00:36:26.221 rmmod nvme_keyring 00:36:26.221 12:40:59 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:26.221 12:40:59 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:26.221 12:40:59 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:26.221 12:40:59 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 1939984 ']' 00:36:26.221 12:40:59 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 1939984 00:36:26.221 12:40:59 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1939984 ']' 00:36:26.221 12:40:59 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1939984 00:36:26.221 12:40:59 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:36:26.221 12:40:59 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:26.221 12:40:59 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1939984 00:36:26.221 12:40:59 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:26.221 12:40:59 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:26.221 12:40:59 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1939984' 00:36:26.221 killing process with pid 1939984 00:36:26.221 12:40:59 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1939984 00:36:26.221 12:40:59 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1939984 00:36:26.221 12:40:59 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:36:26.221 12:40:59 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:28.135 Waiting for block devices as requested 00:36:28.135 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:28.135 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:28.135 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:28.395 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:28.395 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:28.395 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:28.655 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:28.655 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:28.655 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:28.917 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:28.917 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:28.917 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:29.176 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:29.176 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:29.176 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:29.176 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:29.437 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:29.698 12:41:04 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:29.698 12:41:04 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:29.698 12:41:04 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:29.698 12:41:04 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:36:29.698 12:41:04 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:29.698 12:41:04 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:36:29.698 12:41:04 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:29.698 12:41:04 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:29.698 12:41:04 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:29.698 12:41:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:29.698 12:41:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:31.608 12:41:06 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:31.869 
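(Editor's note: nvmftestfini above unloads the initiator-side NVMe modules, strips the SPDK iptables rules, and removes the target namespace. A condensed manual equivalent of what the traced helpers do; "ip netns del" is a rough stand-in for _remove_spdk_ns, and the interface and namespace names are the ones used throughout this run.)

modprobe -v -r nvme-tcp        # emits the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
modprobe -v -r nvme-fabrics    # effectively a no-op here, already removed as a dependency
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test ACCEPT rule
ip netns del cvl_0_0_ns_spdk   # approximately what _remove_spdk_ns does for this run
ip -4 addr flush cvl_0_1       # clear the initiator-side 10.0.0.1/24 address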
00:36:31.869 real 1m16.689s 00:36:31.869 user 8m1.212s 00:36:31.869 sys 0m21.148s 00:36:31.869 12:41:06 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:31.869 12:41:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:31.869 ************************************ 00:36:31.869 END TEST nvmf_dif 00:36:31.869 ************************************ 00:36:31.869 12:41:06 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:31.869 12:41:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:31.869 12:41:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:31.869 12:41:06 -- common/autotest_common.sh@10 -- # set +x 00:36:31.869 ************************************ 00:36:31.869 START TEST nvmf_abort_qd_sizes 00:36:31.869 ************************************ 00:36:31.869 12:41:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:31.869 * Looking for test storage... 00:36:31.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:31.869 12:41:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:31.869 12:41:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:36:31.869 12:41:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:32.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.129 --rc genhtml_branch_coverage=1 00:36:32.129 --rc genhtml_function_coverage=1 00:36:32.129 --rc genhtml_legend=1 00:36:32.129 --rc geninfo_all_blocks=1 00:36:32.129 --rc geninfo_unexecuted_blocks=1 00:36:32.129 00:36:32.129 ' 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:32.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.129 --rc genhtml_branch_coverage=1 00:36:32.129 --rc genhtml_function_coverage=1 00:36:32.129 --rc genhtml_legend=1 00:36:32.129 --rc geninfo_all_blocks=1 00:36:32.129 --rc geninfo_unexecuted_blocks=1 00:36:32.129 00:36:32.129 ' 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:32.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.129 --rc genhtml_branch_coverage=1 00:36:32.129 --rc genhtml_function_coverage=1 00:36:32.129 --rc genhtml_legend=1 00:36:32.129 --rc geninfo_all_blocks=1 00:36:32.129 --rc geninfo_unexecuted_blocks=1 00:36:32.129 00:36:32.129 ' 00:36:32.129 12:41:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:32.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.129 --rc genhtml_branch_coverage=1 00:36:32.129 --rc genhtml_function_coverage=1 00:36:32.129 --rc genhtml_legend=1 00:36:32.129 --rc geninfo_all_blocks=1 00:36:32.129 --rc geninfo_unexecuted_blocks=1 00:36:32.129 00:36:32.129 ' 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:32.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:36:32.130 12:41:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:40.264 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:40.264 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:40.264 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:40.264 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:40.265 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:40.265 12:41:13 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:40.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:40.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:36:40.265 00:36:40.265 --- 10.0.0.2 ping statistics --- 00:36:40.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.265 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:40.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:40.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:36:40.265 00:36:40.265 --- 10.0.0.1 ping statistics --- 00:36:40.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.265 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:36:40.265 12:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:42.174 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:42.174 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:42.174 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:42.174 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:42.174 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:42.174 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:42.174 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:42.174 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:42.174 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:42.174 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:42.174 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:42.174 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:42.174 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:42.174 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:42.174 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:42.174 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:42.174 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:42.433 12:41:16 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:42.433 12:41:16 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:42.433 12:41:16 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:42.433 12:41:16 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:42.433 12:41:16 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:42.433 12:41:16 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:42.692 12:41:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:42.692 12:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:42.692 12:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:42.692 12:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:42.692 12:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=1959381 00:36:42.692 12:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 1959381 00:36:42.692 12:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1959381 ']' 00:36:42.692 12:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:42.692 12:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:42.692 12:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:42.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
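(Editor's note: nvmfappstart above launches nvmf_tgt inside the target namespace and blocks in waitforlisten until the app's RPC socket answers. A minimal sketch of that sequence; the rpc_get_methods probe and the default /var/tmp/spdk.sock path are assumptions about how the wait is implemented, not verbatim from this trace.)

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
# poll the RPC socket until the target is ready to accept commands
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done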
00:36:42.692 12:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:42.692 12:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:42.692 12:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:42.692 [2024-11-04 12:41:17.094708] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:36:42.692 [2024-11-04 12:41:17.094770] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:42.692 [2024-11-04 12:41:17.163822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:42.692 [2024-11-04 12:41:17.203658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:42.692 [2024-11-04 12:41:17.203694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:42.692 [2024-11-04 12:41:17.203702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:42.692 [2024-11-04 12:41:17.203709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:42.692 [2024-11-04 12:41:17.203714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:42.692 [2024-11-04 12:41:17.205287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:42.692 [2024-11-04 12:41:17.205406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:42.692 [2024-11-04 12:41:17.205556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:42.692 [2024-11-04 12:41:17.205557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:43.627 12:41:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:43.627 ************************************ 00:36:43.627 START TEST spdk_target_abort 00:36:43.627 ************************************ 00:36:43.627 12:41:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:36:43.627 12:41:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:43.627 12:41:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:36:43.628 12:41:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.628 12:41:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.886 spdk_targetn1 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.886 [2024-11-04 12:41:18.288740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.886 12:41:18 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.886 [2024-11-04 12:41:18.329034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:43.886 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:43.887 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:43.887 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:43.887 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:43.887 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:43.887 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:43.887 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:43.887 12:41:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 
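(Editor's note: the qds=(4 24 64) loop above reruns the abort example once per queue depth, submitting 4 KiB mixed I/O (-w rw, with what is presumably a 50% read mix via -M 50) and aborting commands in flight; the NOTICE lines that follow are the first pass at -q 4. Condensed, the sweep is:)

for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done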
00:36:44.145 [2024-11-04 12:41:18.523835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:312 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:36:44.145 [2024-11-04 12:41:18.523861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0028 p:1 m:0 dnr:0 00:36:44.145 [2024-11-04 12:41:18.524880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:400 len:8 PRP1 0x200004abe000 PRP2 0x0 00:36:44.145 [2024-11-04 12:41:18.524893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0034 p:1 m:0 dnr:0 00:36:44.145 [2024-11-04 12:41:18.531195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:560 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:36:44.145 [2024-11-04 12:41:18.531210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0049 p:1 m:0 dnr:0 00:36:44.145 [2024-11-04 12:41:18.546130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1152 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:36:44.145 [2024-11-04 12:41:18.546146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0091 p:1 m:0 dnr:0 00:36:44.145 [2024-11-04 12:41:18.546203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1168 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:36:44.145 [2024-11-04 12:41:18.546210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0094 p:1 m:0 dnr:0 00:36:44.145 [2024-11-04 12:41:18.570249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2024 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:36:44.145 [2024-11-04 12:41:18.570265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00fe p:1 m:0 dnr:0 00:36:44.145 [2024-11-04 12:41:18.601213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3184 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:36:44.145 [2024-11-04 12:41:18.601233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0090 p:0 m:0 dnr:0 00:36:44.146 [2024-11-04 12:41:18.617261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3768 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:36:44.146 [2024-11-04 12:41:18.617277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00d9 p:0 m:0 dnr:0 00:36:47.442 Initializing NVMe Controllers 00:36:47.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:47.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:47.442 Initialization complete. Launching workers. 
00:36:47.442 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12643, failed: 8 00:36:47.442 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3162, failed to submit 9489 00:36:47.442 success 738, unsuccessful 2424, failed 0 00:36:47.442 12:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:47.443 12:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:47.443 [2024-11-04 12:41:21.770065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200004e50000 PRP2 0x0 00:36:47.443 [2024-11-04 12:41:21.770107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:36:47.443 [2024-11-04 12:41:21.832001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:1584 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:36:47.443 [2024-11-04 12:41:21.832026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:00d7 p:1 m:0 dnr:0 00:36:47.443 [2024-11-04 12:41:21.862891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:2456 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:36:47.443 [2024-11-04 12:41:21.862913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:36:47.443 [2024-11-04 12:41:21.901895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:3448 len:8 PRP1 0x200004e42000 PRP2 0x0 00:36:47.443 [2024-11-04 12:41:21.901918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:00b3 p:0 m:0 dnr:0 00:36:50.733 Initializing NVMe Controllers 00:36:50.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:50.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:50.733 Initialization complete. Launching workers. 
00:36:50.733 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8647, failed: 4 00:36:50.733 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1241, failed to submit 7410 00:36:50.733 success 355, unsuccessful 886, failed 0 00:36:50.733 12:41:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:50.733 12:41:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:52.108 [2024-11-04 12:41:26.363522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:155 nsid:1 lba:147896 len:8 PRP1 0x200004afe000 PRP2 0x0 00:36:52.108 [2024-11-04 12:41:26.363560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:155 cdw0:0 sqhd:00a5 p:0 m:0 dnr:0 00:36:54.010 Initializing NVMe Controllers 00:36:54.010 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:54.010 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:54.010 Initialization complete. Launching workers. 00:36:54.010 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41722, failed: 1 00:36:54.010 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2645, failed to submit 39078 00:36:54.010 success 613, unsuccessful 2032, failed 0 00:36:54.010 12:41:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:54.010 12:41:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.010 12:41:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:54.010 12:41:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.010 12:41:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:54.010 12:41:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.010 12:41:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:55.385 12:41:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.385 12:41:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1959381 00:36:55.385 12:41:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1959381 ']' 00:36:55.385 12:41:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1959381 00:36:55.385 12:41:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:36:55.385 12:41:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:55.385 12:41:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1959381 00:36:55.645 12:41:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:55.645 12:41:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:55.645 12:41:29 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1959381' 00:36:55.645 killing process with pid 1959381 00:36:55.645 12:41:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1959381 00:36:55.645 12:41:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1959381 00:36:55.645 00:36:55.645 real 0m12.116s 00:36:55.645 user 0m49.573s 00:36:55.645 sys 0m1.808s 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:55.645 ************************************ 00:36:55.645 END TEST spdk_target_abort 00:36:55.645 ************************************ 00:36:55.645 12:41:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:55.645 12:41:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:55.645 12:41:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:55.645 12:41:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:55.645 ************************************ 00:36:55.645 START TEST kernel_target_abort 00:36:55.645 ************************************ 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:55.645 12:41:30 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:55.645 12:41:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:58.944 Waiting for block devices as requested 00:36:59.205 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:59.205 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:59.205 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:59.205 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:59.467 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:59.467 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:59.467 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:59.727 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:59.727 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:59.988 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:59.988 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:59.988 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:00.249 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:00.249 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:00.249 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:00.249 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:00.511 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:00.771 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:37:00.771 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:00.771 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:37:00.771 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:00.771 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:00.771 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:00.771 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:37:00.771 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:00.771 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:00.771 No valid GPT data, bailing 00:37:00.771 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:00.771 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:00.771 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:00.771 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:37:00.771 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:37:00.771 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:00.771 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:00.771 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:00.772 00:37:00.772 Discovery Log Number of Records 2, Generation counter 2 00:37:00.772 =====Discovery Log Entry 0====== 00:37:00.772 trtype: tcp 00:37:00.772 adrfam: ipv4 00:37:00.772 subtype: current discovery subsystem 00:37:00.772 treq: not specified, sq flow control disable supported 00:37:00.772 portid: 1 00:37:00.772 trsvcid: 4420 00:37:00.772 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:00.772 traddr: 10.0.0.1 00:37:00.772 eflags: none 00:37:00.772 sectype: none 00:37:00.772 =====Discovery Log Entry 1====== 00:37:00.772 trtype: tcp 00:37:00.772 adrfam: ipv4 00:37:00.772 subtype: nvme subsystem 00:37:00.772 treq: not specified, sq flow control disable supported 00:37:00.772 portid: 1 00:37:00.772 trsvcid: 4420 00:37:00.772 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:00.772 traddr: 10.0.0.1 00:37:00.772 eflags: none 00:37:00.772 sectype: none 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:00.772 
12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:00.772 12:41:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:04.071 Initializing NVMe Controllers 00:37:04.072 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:04.072 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:04.072 Initialization complete. Launching workers. 00:37:04.072 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66780, failed: 0 00:37:04.072 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 66780, failed to submit 0 00:37:04.072 success 0, unsuccessful 66780, failed 0 00:37:04.072 12:41:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:04.072 12:41:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:07.371 Initializing NVMe Controllers 00:37:07.371 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:07.371 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:07.371 Initialization complete. Launching workers. 
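The kernel-target plumbing traced a little earlier (nvmf/common.sh@684-703) drives the in-kernel nvmet target entirely through configfs; the qd-24 numbers resume just below. A condensed sketch of that setup, assuming nvmet/nvmet_tcp are already loaded and /dev/nvme0n1 is the backing device as in this run (mapping the bare `echo 1` lines in the trace to allow_any_host and namespace enable is an inference):

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir -p "$sub/namespaces/1" "$port"
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"          # expose the subsystem on the port
    nvme discover -a 10.0.0.1 -t tcp -s 4420  # yields the two-record discovery log shown above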
00:37:07.371 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 107491, failed: 0 00:37:07.371 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27098, failed to submit 80393 00:37:07.371 success 0, unsuccessful 27098, failed 0 00:37:07.371 12:41:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:07.371 12:41:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:10.670 Initializing NVMe Controllers 00:37:10.671 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:10.671 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:10.671 Initialization complete. Launching workers. 00:37:10.671 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101212, failed: 0 00:37:10.671 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25310, failed to submit 75902 00:37:10.671 success 0, unsuccessful 25310, failed 0 00:37:10.671 12:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:10.671 12:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:10.671 12:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:37:10.671 12:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:10.671 12:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:10.671 12:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:10.671 12:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:10.671 12:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:37:10.671 12:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:37:10.671 12:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:13.372 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:13.372 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:13.372 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:13.372 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:13.372 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:13.372 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:13.372 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:13.372 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:13.372 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:13.372 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:13.372 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:13.372 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:13.372 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:13.372 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:13.372 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:37:13.372 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:15.283 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:15.283 00:37:15.283 real 0m19.679s 00:37:15.283 user 0m9.571s 00:37:15.283 sys 0m5.756s 00:37:15.283 12:41:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:15.283 12:41:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:15.283 ************************************ 00:37:15.283 END TEST kernel_target_abort 00:37:15.283 ************************************ 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:15.589 rmmod nvme_tcp 00:37:15.589 rmmod nvme_fabrics 00:37:15.589 rmmod nvme_keyring 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 1959381 ']' 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 1959381 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1959381 ']' 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1959381 00:37:15.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1959381) - No such process 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1959381 is not found' 00:37:15.589 Process with pid 1959381 is not found 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:37:15.589 12:41:49 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:18.888 Waiting for block devices as requested 00:37:18.888 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:18.888 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:18.888 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:19.149 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:19.149 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:19.149 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:19.149 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:19.409 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:19.409 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:19.669 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:19.669 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:19.669 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:19.928 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:19.928 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:19.928 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:19.928 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:20.189 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:20.449 12:41:54 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:20.449 12:41:54 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:20.449 12:41:54 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:20.449 12:41:54 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:37:20.449 12:41:54 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:20.449 12:41:54 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:37:20.449 12:41:54 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:20.449 12:41:54 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:20.449 12:41:54 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:20.449 12:41:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:20.449 12:41:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.359 12:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:22.359 00:37:22.359 real 0m50.639s 00:37:22.359 user 1m4.014s 00:37:22.359 sys 0m18.110s 00:37:22.359 12:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:22.359 12:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:22.359 ************************************ 00:37:22.359 END TEST nvmf_abort_qd_sizes 00:37:22.359 ************************************ 00:37:22.619 12:41:56 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:22.619 12:41:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:22.619 12:41:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:22.619 12:41:56 -- common/autotest_common.sh@10 -- # set +x 00:37:22.619 ************************************ 00:37:22.619 START TEST keyring_file 00:37:22.619 ************************************ 00:37:22.619 12:41:56 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:22.619 * Looking for test storage... 
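Between the two suites, nvmftestfini tears the fabric back down; the iptr helper traced just above restores the firewall by round-tripping the ruleset so that only SPDK-tagged entries are dropped. Standalone, with the rule tag and interface name taken from this rig:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's own rules
    ip -4 addr flush cvl_0_1                               # release the test interface address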
00:37:22.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:22.619 12:41:57 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:22.619 12:41:57 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:37:22.619 12:41:57 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:22.620 12:41:57 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:22.620 12:41:57 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:22.620 12:41:57 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:22.620 12:41:57 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:22.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.620 --rc genhtml_branch_coverage=1 00:37:22.620 --rc genhtml_function_coverage=1 00:37:22.620 --rc genhtml_legend=1 00:37:22.620 --rc geninfo_all_blocks=1 00:37:22.620 --rc geninfo_unexecuted_blocks=1 00:37:22.620 00:37:22.620 ' 00:37:22.620 12:41:57 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:22.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.620 --rc genhtml_branch_coverage=1 00:37:22.620 --rc genhtml_function_coverage=1 00:37:22.620 --rc genhtml_legend=1 00:37:22.620 --rc geninfo_all_blocks=1 
00:37:22.620 --rc geninfo_unexecuted_blocks=1 00:37:22.620 00:37:22.620 ' 00:37:22.620 12:41:57 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:22.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.620 --rc genhtml_branch_coverage=1 00:37:22.620 --rc genhtml_function_coverage=1 00:37:22.620 --rc genhtml_legend=1 00:37:22.620 --rc geninfo_all_blocks=1 00:37:22.620 --rc geninfo_unexecuted_blocks=1 00:37:22.620 00:37:22.620 ' 00:37:22.620 12:41:57 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:22.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.620 --rc genhtml_branch_coverage=1 00:37:22.620 --rc genhtml_function_coverage=1 00:37:22.620 --rc genhtml_legend=1 00:37:22.620 --rc geninfo_all_blocks=1 00:37:22.620 --rc geninfo_unexecuted_blocks=1 00:37:22.620 00:37:22.620 ' 00:37:22.620 12:41:57 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:22.620 12:41:57 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:22.882 12:41:57 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:22.882 12:41:57 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:22.882 12:41:57 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:22.882 12:41:57 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:22.882 12:41:57 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.882 12:41:57 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.882 12:41:57 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.882 12:41:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:22.882 12:41:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:22.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:22.882 12:41:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:22.882 12:41:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:22.882 12:41:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:22.882 12:41:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:22.882 12:41:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:22.882 12:41:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
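The prep_key helper whose locals are set above (its body is traced next) boils down to: wrap a raw hex key in the NVMe TLS interchange format and park it in an owner-only temp file. A rough reconstruction from the trace; format_interchange_psk is the real nvmf/common.sh helper seen below, and the comment on its output format is an inference from the NVMe TLS spec rather than from this log:

    prep_key() {
        local name=$1 key=$2 digest=$3 path
        path=$(mktemp)                                  # e.g. /tmp/tmp.SQuCMxVq95 below
        # emits NVMeTLSkey-1:<digest>:<base64 of key bytes + CRC>:
        format_interchange_psk "$key" "$digest" > "$path"
        chmod 0600 "$path"                              # the keyring rejects looser modes
        echo "$path"
    }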
00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SQuCMxVq95 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SQuCMxVq95 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SQuCMxVq95 00:37:22.882 12:41:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.SQuCMxVq95 00:37:22.882 12:41:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SMKouOxiEe 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:22.882 12:41:57 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SMKouOxiEe 00:37:22.882 12:41:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SMKouOxiEe 00:37:22.882 12:41:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.SMKouOxiEe 00:37:22.882 12:41:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=1969515 00:37:22.882 12:41:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1969515 00:37:22.882 12:41:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:22.882 12:41:57 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1969515 ']' 00:37:22.882 12:41:57 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:22.882 12:41:57 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:22.882 12:41:57 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:22.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:22.882 12:41:57 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:22.882 12:41:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:22.882 [2024-11-04 12:41:57.408182] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:37:22.882 [2024-11-04 12:41:57.408262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1969515 ] 00:37:23.142 [2024-11-04 12:41:57.475705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.142 [2024-11-04 12:41:57.520730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:23.712 12:41:58 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:23.712 [2024-11-04 12:41:58.209403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:23.712 null0 00:37:23.712 [2024-11-04 12:41:58.241446] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:23.712 [2024-11-04 12:41:58.241830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.712 12:41:58 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:23.712 [2024-11-04 12:41:58.269499] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:23.712 request: 00:37:23.712 { 00:37:23.712 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:23.712 "secure_channel": false, 00:37:23.712 "listen_address": { 00:37:23.712 "trtype": "tcp", 00:37:23.712 "traddr": "127.0.0.1", 00:37:23.712 "trsvcid": "4420" 00:37:23.712 }, 00:37:23.712 "method": "nvmf_subsystem_add_listener", 00:37:23.712 "req_id": 1 00:37:23.712 } 00:37:23.712 Got JSON-RPC error response 00:37:23.712 response: 00:37:23.712 { 00:37:23.712 
"code": -32602, 00:37:23.712 "message": "Invalid parameters" 00:37:23.712 } 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:23.712 12:41:58 keyring_file -- keyring/file.sh@47 -- # bperfpid=1969606 00:37:23.712 12:41:58 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1969606 /var/tmp/bperf.sock 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1969606 ']' 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:23.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:23.712 12:41:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:23.712 12:41:58 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:23.972 [2024-11-04 12:41:58.324717] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:37:23.972 [2024-11-04 12:41:58.324771] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1969606 ] 00:37:23.972 [2024-11-04 12:41:58.398989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.972 [2024-11-04 12:41:58.434661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.541 12:41:59 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:24.542 12:41:59 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:24.542 12:41:59 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SQuCMxVq95 00:37:24.542 12:41:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SQuCMxVq95 00:37:24.801 12:41:59 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SMKouOxiEe 00:37:24.801 12:41:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SMKouOxiEe 00:37:25.062 12:41:59 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:25.062 12:41:59 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:25.062 12:41:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:25.062 12:41:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:25.062 12:41:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:37:25.062 12:41:59 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.SQuCMxVq95 == \/\t\m\p\/\t\m\p\.\S\Q\u\C\M\x\V\q\9\5 ]] 00:37:25.062 12:41:59 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:25.062 12:41:59 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:25.062 12:41:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:25.062 12:41:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:25.062 12:41:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:25.322 12:41:59 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.SMKouOxiEe == \/\t\m\p\/\t\m\p\.\S\M\K\o\u\O\x\i\E\e ]] 00:37:25.322 12:41:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:25.322 12:41:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:25.322 12:41:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:25.322 12:41:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:25.322 12:41:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:25.322 12:41:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:25.581 12:41:59 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:25.581 12:41:59 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:25.581 12:41:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:25.581 12:41:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:25.581 12:41:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:25.582 12:41:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:25.582 12:41:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:25.582 12:42:00 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:25.582 12:42:00 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:25.582 12:42:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:25.842 [2024-11-04 12:42:00.292836] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:25.842 nvme0n1 00:37:25.842 12:42:00 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:25.842 12:42:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:25.842 12:42:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:25.842 12:42:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:25.842 12:42:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:25.842 12:42:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.102 12:42:00 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:26.102 12:42:00 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:26.102 12:42:00 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:37:26.102 12:42:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:26.102 12:42:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:26.102 12:42:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.102 12:42:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:26.362 12:42:00 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:26.362 12:42:00 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:26.362 Running I/O for 1 seconds... 00:37:27.300 15217.00 IOPS, 59.44 MiB/s 00:37:27.300 Latency(us) 00:37:27.300 [2024-11-04T11:42:01.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.300 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:27.300 nvme0n1 : 1.01 15253.37 59.58 0.00 0.00 8364.07 3454.29 13161.81 00:37:27.300 [2024-11-04T11:42:01.870Z] =================================================================================================================== 00:37:27.300 [2024-11-04T11:42:01.870Z] Total : 15253.37 59.58 0.00 0.00 8364.07 3454.29 13161.81 00:37:27.300 { 00:37:27.300 "results": [ 00:37:27.300 { 00:37:27.300 "job": "nvme0n1", 00:37:27.300 "core_mask": "0x2", 00:37:27.300 "workload": "randrw", 00:37:27.300 "percentage": 50, 00:37:27.300 "status": "finished", 00:37:27.300 "queue_depth": 128, 00:37:27.300 "io_size": 4096, 00:37:27.300 "runtime": 1.006138, 00:37:27.300 "iops": 15253.374785566195, 00:37:27.300 "mibps": 59.58349525611795, 00:37:27.300 "io_failed": 0, 00:37:27.300 "io_timeout": 0, 00:37:27.300 "avg_latency_us": 8364.07408005908, 00:37:27.300 "min_latency_us": 3454.2933333333335, 00:37:27.300 "max_latency_us": 13161.813333333334 00:37:27.300 } 00:37:27.300 ], 00:37:27.300 "core_count": 1 00:37:27.300 } 00:37:27.300 12:42:01 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:27.300 12:42:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:27.560 12:42:02 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:27.560 12:42:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:27.560 12:42:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:27.560 12:42:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:27.560 12:42:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:27.560 12:42:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:27.820 12:42:02 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:27.820 12:42:02 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:27.820 12:42:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:27.820 12:42:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:27.820 12:42:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:27.820 12:42:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:27.820 12:42:02 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:27.820 12:42:02 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:27.820 12:42:02 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:27.820 12:42:02 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:27.820 12:42:02 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:27.820 12:42:02 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:27.820 12:42:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:27.820 12:42:02 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:27.820 12:42:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:27.820 12:42:02 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:27.820 12:42:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:28.080 [2024-11-04 12:42:02.544889] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:28.080 [2024-11-04 12:42:02.545621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206bcc0 (107): Transport endpoint is not connected 00:37:28.080 [2024-11-04 12:42:02.546618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206bcc0 (9): Bad file descriptor 00:37:28.080 [2024-11-04 12:42:02.547619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:28.080 [2024-11-04 12:42:02.547627] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:28.080 [2024-11-04 12:42:02.547633] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:28.080 [2024-11-04 12:42:02.547646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
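The failed attach above is the point of file.sh@70: the listener was brought up against key0, so dialing it with key1's PSK has to fail, and the NOT wrapper from autotest_common.sh inverts the exit status so the test only passes when the command errors out; the JSON-RPC record of that failure follows below. A minimal sketch of the wrapper (the real helper also tolerates a few crash-like statuses):

    NOT() { if "$@"; then return 1; else return 0; fi; }
    NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1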
00:37:28.081 request: 00:37:28.081 { 00:37:28.081 "name": "nvme0", 00:37:28.081 "trtype": "tcp", 00:37:28.081 "traddr": "127.0.0.1", 00:37:28.081 "adrfam": "ipv4", 00:37:28.081 "trsvcid": "4420", 00:37:28.081 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:28.081 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:28.081 "prchk_reftag": false, 00:37:28.081 "prchk_guard": false, 00:37:28.081 "hdgst": false, 00:37:28.081 "ddgst": false, 00:37:28.081 "psk": "key1", 00:37:28.081 "allow_unrecognized_csi": false, 00:37:28.081 "method": "bdev_nvme_attach_controller", 00:37:28.081 "req_id": 1 00:37:28.081 } 00:37:28.081 Got JSON-RPC error response 00:37:28.081 response: 00:37:28.081 { 00:37:28.081 "code": -5, 00:37:28.081 "message": "Input/output error" 00:37:28.081 } 00:37:28.081 12:42:02 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:28.081 12:42:02 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:28.081 12:42:02 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:28.081 12:42:02 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:28.081 12:42:02 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:28.081 12:42:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:28.081 12:42:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:28.081 12:42:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:28.081 12:42:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:28.081 12:42:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:28.340 12:42:02 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:28.340 12:42:02 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:28.341 12:42:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:28.341 12:42:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:28.341 12:42:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:28.341 12:42:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:28.341 12:42:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:28.601 12:42:02 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:28.602 12:42:02 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:28.602 12:42:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:28.602 12:42:03 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:28.602 12:42:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:28.861 12:42:03 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:28.861 12:42:03 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:28.861 12:42:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:29.121 12:42:03 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:37:29.121 12:42:03 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.SQuCMxVq95 00:37:29.121 12:42:03 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.SQuCMxVq95 00:37:29.121 12:42:03 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:29.121 12:42:03 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.SQuCMxVq95 00:37:29.121 12:42:03 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:29.121 12:42:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:29.121 12:42:03 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:29.121 12:42:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:29.121 12:42:03 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SQuCMxVq95 00:37:29.121 12:42:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SQuCMxVq95 00:37:29.121 [2024-11-04 12:42:03.605366] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.SQuCMxVq95': 0100660 00:37:29.121 [2024-11-04 12:42:03.605384] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:29.121 request: 00:37:29.121 { 00:37:29.121 "name": "key0", 00:37:29.121 "path": "/tmp/tmp.SQuCMxVq95", 00:37:29.121 "method": "keyring_file_add_key", 00:37:29.121 "req_id": 1 00:37:29.121 } 00:37:29.121 Got JSON-RPC error response 00:37:29.121 response: 00:37:29.121 { 00:37:29.121 "code": -1, 00:37:29.121 "message": "Operation not permitted" 00:37:29.121 } 00:37:29.121 12:42:03 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:29.121 12:42:03 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:29.121 12:42:03 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:29.121 12:42:03 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:29.121 12:42:03 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.SQuCMxVq95 00:37:29.121 12:42:03 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SQuCMxVq95 00:37:29.121 12:42:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SQuCMxVq95 00:37:29.381 12:42:03 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.SQuCMxVq95 00:37:29.381 12:42:03 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:29.381 12:42:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:29.381 12:42:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:29.381 12:42:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:29.381 12:42:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:29.381 12:42:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:29.641 12:42:03 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:29.641 12:42:03 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:29.641 12:42:03 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:29.641 12:42:03 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:29.641 12:42:03 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:29.641 12:42:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:29.641 12:42:03 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:29.641 12:42:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:29.641 12:42:03 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:29.641 12:42:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:29.641 [2024-11-04 12:42:04.126694] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.SQuCMxVq95': No such file or directory 00:37:29.641 [2024-11-04 12:42:04.126708] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:29.641 [2024-11-04 12:42:04.126720] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:29.641 [2024-11-04 12:42:04.126726] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:29.641 [2024-11-04 12:42:04.126731] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:29.641 [2024-11-04 12:42:04.126736] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:29.641 request: 00:37:29.641 { 00:37:29.641 "name": "nvme0", 00:37:29.641 "trtype": "tcp", 00:37:29.641 "traddr": "127.0.0.1", 00:37:29.641 "adrfam": "ipv4", 00:37:29.641 "trsvcid": "4420", 00:37:29.641 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:29.641 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:29.641 "prchk_reftag": false, 00:37:29.641 "prchk_guard": false, 00:37:29.641 "hdgst": false, 00:37:29.641 "ddgst": false, 00:37:29.641 "psk": "key0", 00:37:29.641 "allow_unrecognized_csi": false, 00:37:29.641 "method": "bdev_nvme_attach_controller", 00:37:29.641 "req_id": 1 00:37:29.641 } 00:37:29.641 Got JSON-RPC error response 00:37:29.641 response: 00:37:29.641 { 00:37:29.641 "code": -19, 00:37:29.641 "message": "No such device" 00:37:29.641 } 00:37:29.641 12:42:04 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:29.641 12:42:04 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:29.641 12:42:04 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:29.641 12:42:04 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:29.641 12:42:04 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:29.641 12:42:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:29.902 12:42:04 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:29.902 12:42:04 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:37:29.902 12:42:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:29.902 12:42:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:29.902 12:42:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:29.902 12:42:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:29.902 12:42:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cxqWpuZv83 00:37:29.902 12:42:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:29.902 12:42:04 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:29.902 12:42:04 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:29.902 12:42:04 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:29.902 12:42:04 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:29.902 12:42:04 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:29.902 12:42:04 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:29.902 12:42:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cxqWpuZv83 00:37:29.902 12:42:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cxqWpuZv83 00:37:29.902 12:42:04 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.cxqWpuZv83 00:37:29.902 12:42:04 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cxqWpuZv83 00:37:29.902 12:42:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cxqWpuZv83 00:37:30.162 12:42:04 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:30.162 12:42:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:30.423 nvme0n1 00:37:30.423 12:42:04 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:30.423 12:42:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:30.423 12:42:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:30.423 12:42:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:30.423 12:42:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:30.423 12:42:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:30.423 12:42:04 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:30.423 12:42:04 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:30.423 12:42:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:30.683 12:42:05 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:30.683 12:42:05 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:37:30.683 12:42:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:30.683 12:42:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:37:30.683 12:42:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:30.942 12:42:05 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:30.942 12:42:05 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:30.942 12:42:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:30.942 12:42:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:30.942 12:42:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:30.942 12:42:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:30.942 12:42:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:30.942 12:42:05 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:30.942 12:42:05 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:30.942 12:42:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:31.202 12:42:05 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:31.202 12:42:05 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:31.202 12:42:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.462 12:42:05 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:31.462 12:42:05 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cxqWpuZv83 00:37:31.462 12:42:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cxqWpuZv83 00:37:31.462 12:42:05 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SMKouOxiEe 00:37:31.462 12:42:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SMKouOxiEe 00:37:31.722 12:42:06 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:31.722 12:42:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:31.981 nvme0n1 00:37:31.981 12:42:06 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:31.981 12:42:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:32.242 12:42:06 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:32.242 "subsystems": [ 00:37:32.242 { 00:37:32.242 "subsystem": "keyring", 00:37:32.242 "config": [ 00:37:32.242 { 00:37:32.242 "method": "keyring_file_add_key", 00:37:32.242 "params": { 00:37:32.242 "name": "key0", 00:37:32.242 "path": "/tmp/tmp.cxqWpuZv83" 00:37:32.242 } 00:37:32.242 }, 00:37:32.242 { 00:37:32.242 "method": "keyring_file_add_key", 00:37:32.242 "params": { 00:37:32.242 "name": "key1", 00:37:32.242 "path": "/tmp/tmp.SMKouOxiEe" 00:37:32.242 } 00:37:32.242 } 00:37:32.242 ] 
00:37:32.242 }, 00:37:32.242 { 00:37:32.242 "subsystem": "iobuf", 00:37:32.242 "config": [ 00:37:32.242 { 00:37:32.242 "method": "iobuf_set_options", 00:37:32.242 "params": { 00:37:32.242 "small_pool_count": 8192, 00:37:32.242 "large_pool_count": 1024, 00:37:32.242 "small_bufsize": 8192, 00:37:32.242 "large_bufsize": 135168 00:37:32.242 } 00:37:32.242 } 00:37:32.242 ] 00:37:32.242 }, 00:37:32.242 { 00:37:32.242 "subsystem": "sock", 00:37:32.242 "config": [ 00:37:32.242 { 00:37:32.242 "method": "sock_set_default_impl", 00:37:32.242 "params": { 00:37:32.242 "impl_name": "posix" 00:37:32.242 } 00:37:32.242 }, 00:37:32.242 { 00:37:32.242 "method": "sock_impl_set_options", 00:37:32.242 "params": { 00:37:32.242 "impl_name": "ssl", 00:37:32.242 "recv_buf_size": 4096, 00:37:32.242 "send_buf_size": 4096, 00:37:32.242 "enable_recv_pipe": true, 00:37:32.242 "enable_quickack": false, 00:37:32.242 "enable_placement_id": 0, 00:37:32.242 "enable_zerocopy_send_server": true, 00:37:32.242 "enable_zerocopy_send_client": false, 00:37:32.242 "zerocopy_threshold": 0, 00:37:32.242 "tls_version": 0, 00:37:32.242 "enable_ktls": false 00:37:32.242 } 00:37:32.242 }, 00:37:32.242 { 00:37:32.242 "method": "sock_impl_set_options", 00:37:32.242 "params": { 00:37:32.242 "impl_name": "posix", 00:37:32.242 "recv_buf_size": 2097152, 00:37:32.242 "send_buf_size": 2097152, 00:37:32.242 "enable_recv_pipe": true, 00:37:32.242 "enable_quickack": false, 00:37:32.242 "enable_placement_id": 0, 00:37:32.242 "enable_zerocopy_send_server": true, 00:37:32.242 "enable_zerocopy_send_client": false, 00:37:32.242 "zerocopy_threshold": 0, 00:37:32.242 "tls_version": 0, 00:37:32.242 "enable_ktls": false 00:37:32.242 } 00:37:32.242 } 00:37:32.242 ] 00:37:32.242 }, 00:37:32.242 { 00:37:32.242 "subsystem": "vmd", 00:37:32.242 "config": [] 00:37:32.242 }, 00:37:32.242 { 00:37:32.242 "subsystem": "accel", 00:37:32.242 "config": [ 00:37:32.242 { 00:37:32.242 "method": "accel_set_options", 00:37:32.242 "params": { 00:37:32.242 "small_cache_size": 128, 00:37:32.242 "large_cache_size": 16, 00:37:32.242 "task_count": 2048, 00:37:32.242 "sequence_count": 2048, 00:37:32.242 "buf_count": 2048 00:37:32.242 } 00:37:32.242 } 00:37:32.242 ] 00:37:32.242 }, 00:37:32.242 { 00:37:32.242 "subsystem": "bdev", 00:37:32.242 "config": [ 00:37:32.242 { 00:37:32.242 "method": "bdev_set_options", 00:37:32.242 "params": { 00:37:32.242 "bdev_io_pool_size": 65535, 00:37:32.242 "bdev_io_cache_size": 256, 00:37:32.242 "bdev_auto_examine": true, 00:37:32.242 "iobuf_small_cache_size": 128, 00:37:32.242 "iobuf_large_cache_size": 16 00:37:32.242 } 00:37:32.242 }, 00:37:32.242 { 00:37:32.242 "method": "bdev_raid_set_options", 00:37:32.242 "params": { 00:37:32.242 "process_window_size_kb": 1024, 00:37:32.242 "process_max_bandwidth_mb_sec": 0 00:37:32.242 } 00:37:32.242 }, 00:37:32.242 { 00:37:32.242 "method": "bdev_iscsi_set_options", 00:37:32.242 "params": { 00:37:32.242 "timeout_sec": 30 00:37:32.242 } 00:37:32.242 }, 00:37:32.242 { 00:37:32.242 "method": "bdev_nvme_set_options", 00:37:32.242 "params": { 00:37:32.242 "action_on_timeout": "none", 00:37:32.242 "timeout_us": 0, 00:37:32.242 "timeout_admin_us": 0, 00:37:32.242 "keep_alive_timeout_ms": 10000, 00:37:32.242 "arbitration_burst": 0, 00:37:32.242 "low_priority_weight": 0, 00:37:32.242 "medium_priority_weight": 0, 00:37:32.242 "high_priority_weight": 0, 00:37:32.242 "nvme_adminq_poll_period_us": 10000, 00:37:32.242 "nvme_ioq_poll_period_us": 0, 00:37:32.242 "io_queue_requests": 512, 00:37:32.242 "delay_cmd_submit": true, 
00:37:32.242 "transport_retry_count": 4, 00:37:32.242 "bdev_retry_count": 3, 00:37:32.243 "transport_ack_timeout": 0, 00:37:32.243 "ctrlr_loss_timeout_sec": 0, 00:37:32.243 "reconnect_delay_sec": 0, 00:37:32.243 "fast_io_fail_timeout_sec": 0, 00:37:32.243 "disable_auto_failback": false, 00:37:32.243 "generate_uuids": false, 00:37:32.243 "transport_tos": 0, 00:37:32.243 "nvme_error_stat": false, 00:37:32.243 "rdma_srq_size": 0, 00:37:32.243 "io_path_stat": false, 00:37:32.243 "allow_accel_sequence": false, 00:37:32.243 "rdma_max_cq_size": 0, 00:37:32.243 "rdma_cm_event_timeout_ms": 0, 00:37:32.243 "dhchap_digests": [ 00:37:32.243 "sha256", 00:37:32.243 "sha384", 00:37:32.243 "sha512" 00:37:32.243 ], 00:37:32.243 "dhchap_dhgroups": [ 00:37:32.243 "null", 00:37:32.243 "ffdhe2048", 00:37:32.243 "ffdhe3072", 00:37:32.243 "ffdhe4096", 00:37:32.243 "ffdhe6144", 00:37:32.243 "ffdhe8192" 00:37:32.243 ] 00:37:32.243 } 00:37:32.243 }, 00:37:32.243 { 00:37:32.243 "method": "bdev_nvme_attach_controller", 00:37:32.243 "params": { 00:37:32.243 "name": "nvme0", 00:37:32.243 "trtype": "TCP", 00:37:32.243 "adrfam": "IPv4", 00:37:32.243 "traddr": "127.0.0.1", 00:37:32.243 "trsvcid": "4420", 00:37:32.243 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:32.243 "prchk_reftag": false, 00:37:32.243 "prchk_guard": false, 00:37:32.243 "ctrlr_loss_timeout_sec": 0, 00:37:32.243 "reconnect_delay_sec": 0, 00:37:32.243 "fast_io_fail_timeout_sec": 0, 00:37:32.243 "psk": "key0", 00:37:32.243 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:32.243 "hdgst": false, 00:37:32.243 "ddgst": false, 00:37:32.243 "multipath": "multipath" 00:37:32.243 } 00:37:32.243 }, 00:37:32.243 { 00:37:32.243 "method": "bdev_nvme_set_hotplug", 00:37:32.243 "params": { 00:37:32.243 "period_us": 100000, 00:37:32.243 "enable": false 00:37:32.243 } 00:37:32.243 }, 00:37:32.243 { 00:37:32.243 "method": "bdev_wait_for_examine" 00:37:32.243 } 00:37:32.243 ] 00:37:32.243 }, 00:37:32.243 { 00:37:32.243 "subsystem": "nbd", 00:37:32.243 "config": [] 00:37:32.243 } 00:37:32.243 ] 00:37:32.243 }' 00:37:32.243 12:42:06 keyring_file -- keyring/file.sh@115 -- # killprocess 1969606 00:37:32.243 12:42:06 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1969606 ']' 00:37:32.243 12:42:06 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1969606 00:37:32.243 12:42:06 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:32.243 12:42:06 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:32.243 12:42:06 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1969606 00:37:32.243 12:42:06 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:32.243 12:42:06 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:32.243 12:42:06 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1969606' 00:37:32.243 killing process with pid 1969606 00:37:32.243 12:42:06 keyring_file -- common/autotest_common.sh@969 -- # kill 1969606 00:37:32.243 Received shutdown signal, test time was about 1.000000 seconds 00:37:32.243 00:37:32.243 Latency(us) 00:37:32.243 [2024-11-04T11:42:06.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.243 [2024-11-04T11:42:06.813Z] =================================================================================================================== 00:37:32.243 [2024-11-04T11:42:06.813Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:32.243 12:42:06 keyring_file -- 
common/autotest_common.sh@974 -- # wait 1969606 00:37:32.243 12:42:06 keyring_file -- keyring/file.sh@118 -- # bperfpid=1971520 00:37:32.243 12:42:06 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1971520 /var/tmp/bperf.sock 00:37:32.243 12:42:06 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1971520 ']' 00:37:32.243 12:42:06 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:32.243 12:42:06 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:32.243 12:42:06 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:32.243 12:42:06 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:32.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:32.243 12:42:06 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:32.243 12:42:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:32.243 12:42:06 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:32.243 "subsystems": [ 00:37:32.243 { 00:37:32.243 "subsystem": "keyring", 00:37:32.243 "config": [ 00:37:32.243 { 00:37:32.243 "method": "keyring_file_add_key", 00:37:32.243 "params": { 00:37:32.243 "name": "key0", 00:37:32.243 "path": "/tmp/tmp.cxqWpuZv83" 00:37:32.243 } 00:37:32.243 }, 00:37:32.243 { 00:37:32.243 "method": "keyring_file_add_key", 00:37:32.243 "params": { 00:37:32.243 "name": "key1", 00:37:32.243 "path": "/tmp/tmp.SMKouOxiEe" 00:37:32.243 } 00:37:32.243 } 00:37:32.243 ] 00:37:32.243 }, 00:37:32.243 { 00:37:32.243 "subsystem": "iobuf", 00:37:32.243 "config": [ 00:37:32.243 { 00:37:32.243 "method": "iobuf_set_options", 00:37:32.243 "params": { 00:37:32.243 "small_pool_count": 8192, 00:37:32.243 "large_pool_count": 1024, 00:37:32.243 "small_bufsize": 8192, 00:37:32.243 "large_bufsize": 135168 00:37:32.243 } 00:37:32.243 } 00:37:32.243 ] 00:37:32.243 }, 00:37:32.243 { 00:37:32.243 "subsystem": "sock", 00:37:32.243 "config": [ 00:37:32.243 { 00:37:32.243 "method": "sock_set_default_impl", 00:37:32.243 "params": { 00:37:32.243 "impl_name": "posix" 00:37:32.243 } 00:37:32.243 }, 00:37:32.243 { 00:37:32.243 "method": "sock_impl_set_options", 00:37:32.243 "params": { 00:37:32.243 "impl_name": "ssl", 00:37:32.243 "recv_buf_size": 4096, 00:37:32.243 "send_buf_size": 4096, 00:37:32.243 "enable_recv_pipe": true, 00:37:32.243 "enable_quickack": false, 00:37:32.243 "enable_placement_id": 0, 00:37:32.243 "enable_zerocopy_send_server": true, 00:37:32.243 "enable_zerocopy_send_client": false, 00:37:32.243 "zerocopy_threshold": 0, 00:37:32.243 "tls_version": 0, 00:37:32.243 "enable_ktls": false 00:37:32.243 } 00:37:32.243 }, 00:37:32.243 { 00:37:32.243 "method": "sock_impl_set_options", 00:37:32.243 "params": { 00:37:32.243 "impl_name": "posix", 00:37:32.243 "recv_buf_size": 2097152, 00:37:32.243 "send_buf_size": 2097152, 00:37:32.243 "enable_recv_pipe": true, 00:37:32.243 "enable_quickack": false, 00:37:32.243 "enable_placement_id": 0, 00:37:32.243 "enable_zerocopy_send_server": true, 00:37:32.243 "enable_zerocopy_send_client": false, 00:37:32.243 "zerocopy_threshold": 0, 00:37:32.243 "tls_version": 0, 00:37:32.243 "enable_ktls": false 00:37:32.243 } 00:37:32.243 } 00:37:32.243 ] 00:37:32.243 }, 00:37:32.243 { 00:37:32.243 "subsystem": "vmd", 00:37:32.243 
"config": [] 00:37:32.243 }, 00:37:32.243 { 00:37:32.243 "subsystem": "accel", 00:37:32.243 "config": [ 00:37:32.243 { 00:37:32.243 "method": "accel_set_options", 00:37:32.243 "params": { 00:37:32.243 "small_cache_size": 128, 00:37:32.243 "large_cache_size": 16, 00:37:32.243 "task_count": 2048, 00:37:32.243 "sequence_count": 2048, 00:37:32.243 "buf_count": 2048 00:37:32.243 } 00:37:32.243 } 00:37:32.243 ] 00:37:32.243 }, 00:37:32.243 { 00:37:32.243 "subsystem": "bdev", 00:37:32.243 "config": [ 00:37:32.243 { 00:37:32.243 "method": "bdev_set_options", 00:37:32.243 "params": { 00:37:32.243 "bdev_io_pool_size": 65535, 00:37:32.243 "bdev_io_cache_size": 256, 00:37:32.243 "bdev_auto_examine": true, 00:37:32.244 "iobuf_small_cache_size": 128, 00:37:32.244 "iobuf_large_cache_size": 16 00:37:32.244 } 00:37:32.244 }, 00:37:32.244 { 00:37:32.244 "method": "bdev_raid_set_options", 00:37:32.244 "params": { 00:37:32.244 "process_window_size_kb": 1024, 00:37:32.244 "process_max_bandwidth_mb_sec": 0 00:37:32.244 } 00:37:32.244 }, 00:37:32.244 { 00:37:32.244 "method": "bdev_iscsi_set_options", 00:37:32.244 "params": { 00:37:32.244 "timeout_sec": 30 00:37:32.244 } 00:37:32.244 }, 00:37:32.244 { 00:37:32.244 "method": "bdev_nvme_set_options", 00:37:32.244 "params": { 00:37:32.244 "action_on_timeout": "none", 00:37:32.244 "timeout_us": 0, 00:37:32.244 "timeout_admin_us": 0, 00:37:32.244 "keep_alive_timeout_ms": 10000, 00:37:32.244 "arbitration_burst": 0, 00:37:32.244 "low_priority_weight": 0, 00:37:32.244 "medium_priority_weight": 0, 00:37:32.244 "high_priority_weight": 0, 00:37:32.244 "nvme_adminq_poll_period_us": 10000, 00:37:32.244 "nvme_ioq_poll_period_us": 0, 00:37:32.244 "io_queue_requests": 512, 00:37:32.244 "delay_cmd_submit": true, 00:37:32.244 "transport_retry_count": 4, 00:37:32.244 "bdev_retry_count": 3, 00:37:32.244 "transport_ack_timeout": 0, 00:37:32.244 "ctrlr_loss_timeout_sec": 0, 00:37:32.244 "reconnect_delay_sec": 0, 00:37:32.244 "fast_io_fail_timeout_sec": 0, 00:37:32.244 "disable_auto_failback": false, 00:37:32.244 "generate_uuids": false, 00:37:32.244 "transport_tos": 0, 00:37:32.244 "nvme_error_stat": false, 00:37:32.244 "rdma_srq_size": 0, 00:37:32.244 "io_path_stat": false, 00:37:32.244 "allow_accel_sequence": false, 00:37:32.244 "rdma_max_cq_size": 0, 00:37:32.244 "rdma_cm_event_timeout_ms": 0, 00:37:32.244 "dhchap_digests": [ 00:37:32.244 "sha256", 00:37:32.244 "sha384", 00:37:32.244 "sha512" 00:37:32.244 ], 00:37:32.244 "dhchap_dhgroups": [ 00:37:32.244 "null", 00:37:32.244 "ffdhe2048", 00:37:32.244 "ffdhe3072", 00:37:32.244 "ffdhe4096", 00:37:32.244 "ffdhe6144", 00:37:32.244 "ffdhe8192" 00:37:32.244 ] 00:37:32.244 } 00:37:32.244 }, 00:37:32.244 { 00:37:32.244 "method": "bdev_nvme_attach_controller", 00:37:32.244 "params": { 00:37:32.244 "name": "nvme0", 00:37:32.244 "trtype": "TCP", 00:37:32.244 "adrfam": "IPv4", 00:37:32.244 "traddr": "127.0.0.1", 00:37:32.244 "trsvcid": "4420", 00:37:32.244 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:32.244 "prchk_reftag": false, 00:37:32.244 "prchk_guard": false, 00:37:32.244 "ctrlr_loss_timeout_sec": 0, 00:37:32.244 "reconnect_delay_sec": 0, 00:37:32.244 "fast_io_fail_timeout_sec": 0, 00:37:32.244 "psk": "key0", 00:37:32.244 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:32.244 "hdgst": false, 00:37:32.244 "ddgst": false, 00:37:32.244 "multipath": "multipath" 00:37:32.244 } 00:37:32.244 }, 00:37:32.244 { 00:37:32.244 "method": "bdev_nvme_set_hotplug", 00:37:32.244 "params": { 00:37:32.244 "period_us": 100000, 00:37:32.244 "enable": false 
00:37:32.244 } 00:37:32.244 }, 00:37:32.244 { 00:37:32.244 "method": "bdev_wait_for_examine" 00:37:32.244 } 00:37:32.244 ] 00:37:32.244 }, 00:37:32.244 { 00:37:32.244 "subsystem": "nbd", 00:37:32.244 "config": [] 00:37:32.244 } 00:37:32.244 ] 00:37:32.244 }' 00:37:32.504 [2024-11-04 12:42:06.832084] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 00:37:32.504 [2024-11-04 12:42:06.832144] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1971520 ] 00:37:32.504 [2024-11-04 12:42:06.906400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:32.504 [2024-11-04 12:42:06.935172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:32.764 [2024-11-04 12:42:07.077560] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:33.334 12:42:07 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:33.334 12:42:07 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:33.334 12:42:07 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:33.334 12:42:07 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:33.334 12:42:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.334 12:42:07 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:33.334 12:42:07 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:33.334 12:42:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:33.334 12:42:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:33.334 12:42:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.334 12:42:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:33.334 12:42:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.593 12:42:07 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:33.593 12:42:07 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:33.593 12:42:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:33.593 12:42:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:33.593 12:42:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.593 12:42:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:33.593 12:42:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.593 12:42:08 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:37:33.593 12:42:08 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:33.593 12:42:08 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:33.593 12:42:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:33.853 12:42:08 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:33.853 12:42:08 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:33.853 12:42:08 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.cxqWpuZv83 
/tmp/tmp.SMKouOxiEe 00:37:33.853 12:42:08 keyring_file -- keyring/file.sh@20 -- # killprocess 1971520 00:37:33.853 12:42:08 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1971520 ']' 00:37:33.853 12:42:08 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1971520 00:37:33.853 12:42:08 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:33.853 12:42:08 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:33.853 12:42:08 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1971520 00:37:33.853 12:42:08 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:33.853 12:42:08 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:33.853 12:42:08 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1971520' 00:37:33.853 killing process with pid 1971520 00:37:33.853 12:42:08 keyring_file -- common/autotest_common.sh@969 -- # kill 1971520 00:37:33.853 Received shutdown signal, test time was about 1.000000 seconds 00:37:33.853 00:37:33.853 Latency(us) 00:37:33.853 [2024-11-04T11:42:08.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.853 [2024-11-04T11:42:08.423Z] =================================================================================================================== 00:37:33.853 [2024-11-04T11:42:08.423Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:33.853 12:42:08 keyring_file -- common/autotest_common.sh@974 -- # wait 1971520 00:37:34.113 12:42:08 keyring_file -- keyring/file.sh@21 -- # killprocess 1969515 00:37:34.114 12:42:08 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1969515 ']' 00:37:34.114 12:42:08 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1969515 00:37:34.114 12:42:08 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:34.114 12:42:08 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:34.114 12:42:08 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1969515 00:37:34.114 12:42:08 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:34.114 12:42:08 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:34.114 12:42:08 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1969515' 00:37:34.114 killing process with pid 1969515 00:37:34.114 12:42:08 keyring_file -- common/autotest_common.sh@969 -- # kill 1969515 00:37:34.114 12:42:08 keyring_file -- common/autotest_common.sh@974 -- # wait 1969515 00:37:34.374 00:37:34.374 real 0m11.759s 00:37:34.374 user 0m28.267s 00:37:34.374 sys 0m2.633s 00:37:34.374 12:42:08 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:34.374 12:42:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:34.374 ************************************ 00:37:34.374 END TEST keyring_file 00:37:34.374 ************************************ 00:37:34.374 12:42:08 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:37:34.374 12:42:08 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:34.374 12:42:08 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:34.374 12:42:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:34.374 12:42:08 -- common/autotest_common.sh@10 -- # set +x 
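Before the keyring_linux half of the run starts below, the keyring_file flow traced above condenses to a short, reproducible sequence. This is a minimal sketch, not the harness itself: the rpc.py path is shortened, /tmp/psk.key stands in for the mktemp names in the trace, and the PSK value is the interchange-format key visible in this run.

# SPDK rejects key files that are group- or world-accessible; the chmod 0660
# attempt above fails with -1 (Operation not permitted), so the file must be 0600.
echo -n "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > /tmp/psk.key
chmod 0600 /tmp/psk.key

# Register the file under a key name in the running app's keyring.
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/psk.key

# Attach an NVMe-oF/TCP controller that uses the named PSK for TLS.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# Teardown, mirroring the refcnt checks above: a removed key lingers with
# .removed == true until the controller that references it detaches.
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
rm -f /tmp/psk.key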
00:37:34.374 ************************************ 00:37:34.374 START TEST keyring_linux 00:37:34.374 ************************************ 00:37:34.374 12:42:08 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:34.374 Joined session keyring: 800756201 00:37:34.374 * Looking for test storage... 00:37:34.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:34.374 12:42:08 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:34.374 12:42:08 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:37:34.374 12:42:08 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:34.636 12:42:08 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:34.636 12:42:09 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:34.636 12:42:09 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:34.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.636 --rc genhtml_branch_coverage=1 00:37:34.636 --rc genhtml_function_coverage=1 00:37:34.636 --rc genhtml_legend=1 00:37:34.636 --rc geninfo_all_blocks=1 00:37:34.636 --rc geninfo_unexecuted_blocks=1 00:37:34.636 00:37:34.636 ' 00:37:34.636 12:42:09 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:34.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.636 --rc genhtml_branch_coverage=1 00:37:34.636 --rc genhtml_function_coverage=1 00:37:34.636 --rc genhtml_legend=1 00:37:34.636 --rc geninfo_all_blocks=1 00:37:34.636 --rc geninfo_unexecuted_blocks=1 00:37:34.636 00:37:34.636 ' 00:37:34.636 12:42:09 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:34.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.636 --rc genhtml_branch_coverage=1 00:37:34.636 --rc genhtml_function_coverage=1 00:37:34.636 --rc genhtml_legend=1 00:37:34.636 --rc geninfo_all_blocks=1 00:37:34.636 --rc geninfo_unexecuted_blocks=1 00:37:34.636 00:37:34.636 ' 00:37:34.636 12:42:09 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:34.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.636 --rc genhtml_branch_coverage=1 00:37:34.636 --rc genhtml_function_coverage=1 00:37:34.636 --rc genhtml_legend=1 00:37:34.636 --rc geninfo_all_blocks=1 00:37:34.636 --rc geninfo_unexecuted_blocks=1 00:37:34.636 00:37:34.636 ' 00:37:34.636 12:42:09 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:34.636 12:42:09 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:34.636 12:42:09 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:34.636 12:42:09 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.636 12:42:09 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.636 12:42:09 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.636 12:42:09 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:34.636 12:42:09 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:34.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:34.636 12:42:09 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:34.636 12:42:09 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:34.636 12:42:09 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:34.636 12:42:09 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:34.636 12:42:09 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:34.636 12:42:09 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:34.636 12:42:09 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:34.636 12:42:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:34.636 12:42:09 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:34.636 12:42:09 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:34.636 12:42:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:34.636 12:42:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:34.636 12:42:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:34.636 12:42:09 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:34.637 12:42:09 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:37:34.637 12:42:09 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:34.637 12:42:09 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:34.637 12:42:09 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:37:34.637 12:42:09 keyring_linux -- nvmf/common.sh@731 -- # python - 00:37:34.637 12:42:09 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:34.637 12:42:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:34.637 /tmp/:spdk-test:key0 00:37:34.637 12:42:09 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:34.637 12:42:09 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:34.637 12:42:09 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:34.637 12:42:09 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:34.637 12:42:09 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:34.637 12:42:09 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:34.637 
12:42:09 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:34.637 12:42:09 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:34.637 12:42:09 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:37:34.637 12:42:09 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:34.637 12:42:09 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:37:34.637 12:42:09 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:37:34.637 12:42:09 keyring_linux -- nvmf/common.sh@731 -- # python - 00:37:34.637 12:42:09 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:34.637 12:42:09 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:34.637 /tmp/:spdk-test:key1 00:37:34.637 12:42:09 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1972129 00:37:34.637 12:42:09 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1972129 00:37:34.637 12:42:09 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:34.637 12:42:09 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1972129 ']' 00:37:34.637 12:42:09 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:34.637 12:42:09 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:34.637 12:42:09 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:34.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:34.637 12:42:09 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:34.637 12:42:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:34.897 [2024-11-04 12:42:09.226724] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
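For anyone regenerating the prep_key values above by hand: the inline `python -` step wraps the key string in the NVMe TLS PSK interchange format, that is, base64 of the key with its CRC32 appended little-endian, behind the NVMeTLSkey-1 prefix and a two-digit hash indicator (00 here, meaning no HMAC transform). A standalone sketch, assuming python3 and only its standard library; the key string is the test vector used throughout this run and is treated as literal bytes, exactly as nvmf/common.sh does:

psk=$(python3 <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"      # PSK string, used as raw key bytes
crc = zlib.crc32(key).to_bytes(4, "little")    # integrity check, appended to the key
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
EOF
)
echo "$psk"   # NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The output matches the value that keyctl stores for :spdk-test:key0 in the trace below.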
00:37:34.897 [2024-11-04 12:42:09.226814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972129 ] 00:37:34.897 [2024-11-04 12:42:09.291661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:34.897 [2024-11-04 12:42:09.335689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:35.466 12:42:09 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:35.466 12:42:09 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:37:35.466 12:42:09 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:35.466 12:42:09 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.466 12:42:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:35.466 [2024-11-04 12:42:09.994423] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:35.466 null0 00:37:35.466 [2024-11-04 12:42:10.026460] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:35.466 [2024-11-04 12:42:10.026859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:35.725 12:42:10 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.725 12:42:10 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:35.725 87283416 00:37:35.726 12:42:10 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:35.726 948730070 00:37:35.726 12:42:10 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1972594 00:37:35.726 12:42:10 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1972594 /var/tmp/bperf.sock 00:37:35.726 12:42:10 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:35.726 12:42:10 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1972594 ']' 00:37:35.726 12:42:10 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:35.726 12:42:10 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:35.726 12:42:10 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:35.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:35.726 12:42:10 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:35.726 12:42:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:35.726 [2024-11-04 12:42:10.105577] Starting SPDK v25.01-pre git sha1 c3ade7c9c / DPDK 24.03.0 initialization... 
00:37:35.726 [2024-11-04 12:42:10.105629] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972594 ] 00:37:35.726 [2024-11-04 12:42:10.179889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:35.726 [2024-11-04 12:42:10.209679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:36.664 12:42:10 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:36.664 12:42:10 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:37:36.664 12:42:10 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:36.664 12:42:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:36.664 12:42:11 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:36.664 12:42:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:36.925 12:42:11 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:36.925 12:42:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:36.925 [2024-11-04 12:42:11.405651] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:36.925 nvme0n1 00:37:37.185 12:42:11 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:37.185 12:42:11 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:37.185 12:42:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:37.185 12:42:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:37.185 12:42:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:37.185 12:42:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.185 12:42:11 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:37.185 12:42:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:37.185 12:42:11 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:37.185 12:42:11 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:37.185 12:42:11 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:37.185 12:42:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.185 12:42:11 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:37.445 12:42:11 keyring_linux -- keyring/linux.sh@25 -- # sn=87283416 00:37:37.445 12:42:11 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:37.445 12:42:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:37.445 12:42:11 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 87283416 == \8\7\2\8\3\4\1\6 ]] 00:37:37.445 12:42:11 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 87283416 00:37:37.445 12:42:11 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:37.445 12:42:11 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:37.445 Running I/O for 1 seconds... 00:37:38.384 16417.00 IOPS, 64.13 MiB/s 00:37:38.384 Latency(us) 00:37:38.384 [2024-11-04T11:42:12.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:38.384 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:38.384 nvme0n1 : 1.01 16417.50 64.13 0.00 0.00 7762.95 6826.67 15728.64 00:37:38.384 [2024-11-04T11:42:12.954Z] =================================================================================================================== 00:37:38.384 [2024-11-04T11:42:12.954Z] Total : 16417.50 64.13 0.00 0.00 7762.95 6826.67 15728.64 00:37:38.384 { 00:37:38.384 "results": [ 00:37:38.384 { 00:37:38.384 "job": "nvme0n1", 00:37:38.384 "core_mask": "0x2", 00:37:38.384 "workload": "randread", 00:37:38.384 "status": "finished", 00:37:38.384 "queue_depth": 128, 00:37:38.384 "io_size": 4096, 00:37:38.384 "runtime": 1.007766, 00:37:38.384 "iops": 16417.50168193807, 00:37:38.384 "mibps": 64.13086594507058, 00:37:38.384 "io_failed": 0, 00:37:38.384 "io_timeout": 0, 00:37:38.384 "avg_latency_us": 7762.945340989222, 00:37:38.384 "min_latency_us": 6826.666666666667, 00:37:38.384 "max_latency_us": 15728.64 00:37:38.384 } 00:37:38.384 ], 00:37:38.384 "core_count": 1 00:37:38.384 } 00:37:38.644 12:42:12 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:38.644 12:42:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:38.644 12:42:13 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:38.644 12:42:13 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:38.644 12:42:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:38.644 12:42:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:38.644 12:42:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:38.644 12:42:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:38.904 12:42:13 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:38.904 12:42:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:38.904 12:42:13 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:38.904 12:42:13 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:38.904 12:42:13 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:37:38.904 12:42:13 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
00:37:38.904 12:42:13 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:38.904 12:42:13 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:38.904 12:42:13 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:38.904 12:42:13 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:38.904 12:42:13 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:38.904 12:42:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:39.164 [2024-11-04 12:42:13.476804] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:39.164 [2024-11-04 12:42:13.477585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203d550 (107): Transport endpoint is not connected 00:37:39.164 [2024-11-04 12:42:13.478581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203d550 (9): Bad file descriptor 00:37:39.164 [2024-11-04 12:42:13.479582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:39.164 [2024-11-04 12:42:13.479595] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:39.164 [2024-11-04 12:42:13.479601] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:39.164 [2024-11-04 12:42:13.479607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
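That failure is the intended outcome: :spdk-test:key1 was never added to the session keyring, so the TCP connection is torn down during TLS setup and the controller lands in a failed state; the JSON-RPC request/response dump that follows records the -5 (Input/output error) returned to the caller. A condensed sketch of what the NOT/valid_exec_arg wrapper effectively asserts here, with the rpc.py arguments copied from the log and the inversion logic paraphrased rather than quoted from autotest_common.sh:

    # Sketch: the negative case must exit non-zero for the test to pass.
    if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
          -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
          --psk :spdk-test:key1; then
        echo 'attach with an absent key unexpectedly succeeded' >&2
        exit 1
    fi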
00:37:39.164 request: 00:37:39.164 { 00:37:39.164 "name": "nvme0", 00:37:39.164 "trtype": "tcp", 00:37:39.164 "traddr": "127.0.0.1", 00:37:39.164 "adrfam": "ipv4", 00:37:39.164 "trsvcid": "4420", 00:37:39.164 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:39.164 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:39.164 "prchk_reftag": false, 00:37:39.164 "prchk_guard": false, 00:37:39.164 "hdgst": false, 00:37:39.164 "ddgst": false, 00:37:39.164 "psk": ":spdk-test:key1", 00:37:39.164 "allow_unrecognized_csi": false, 00:37:39.164 "method": "bdev_nvme_attach_controller", 00:37:39.164 "req_id": 1 00:37:39.164 } 00:37:39.164 Got JSON-RPC error response 00:37:39.164 response: 00:37:39.164 { 00:37:39.164 "code": -5, 00:37:39.164 "message": "Input/output error" 00:37:39.164 } 00:37:39.164 12:42:13 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:37:39.164 12:42:13 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:39.164 12:42:13 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:39.164 12:42:13 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:39.164 12:42:13 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:39.164 12:42:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:39.164 12:42:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:39.164 12:42:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:39.164 12:42:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:39.164 12:42:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:39.164 12:42:13 keyring_linux -- keyring/linux.sh@33 -- # sn=87283416 00:37:39.164 12:42:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 87283416 00:37:39.164 1 links removed 00:37:39.164 12:42:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:39.164 12:42:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:39.164 12:42:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:39.164 12:42:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:39.164 12:42:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:39.164 12:42:13 keyring_linux -- keyring/linux.sh@33 -- # sn=948730070 00:37:39.164 12:42:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 948730070 00:37:39.164 1 links removed 00:37:39.164 12:42:13 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1972594 00:37:39.164 12:42:13 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1972594 ']' 00:37:39.164 12:42:13 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1972594 00:37:39.164 12:42:13 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:37:39.164 12:42:13 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:39.164 12:42:13 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1972594 00:37:39.164 12:42:13 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:39.164 12:42:13 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:39.164 12:42:13 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1972594' 00:37:39.164 killing process with pid 1972594 00:37:39.164 12:42:13 keyring_linux -- common/autotest_common.sh@969 -- # kill 1972594 00:37:39.164 Received shutdown signal, test time was about 1.000000 seconds 00:37:39.164 00:37:39.164 
Latency(us) 00:37:39.164 [2024-11-04T11:42:13.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:39.164 [2024-11-04T11:42:13.734Z] =================================================================================================================== 00:37:39.164 [2024-11-04T11:42:13.734Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:39.164 12:42:13 keyring_linux -- common/autotest_common.sh@974 -- # wait 1972594 00:37:39.164 12:42:13 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1972129 00:37:39.164 12:42:13 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1972129 ']' 00:37:39.164 12:42:13 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1972129 00:37:39.164 12:42:13 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:37:39.165 12:42:13 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:39.165 12:42:13 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1972129 00:37:39.424 12:42:13 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:39.424 12:42:13 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:39.424 12:42:13 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1972129' 00:37:39.424 killing process with pid 1972129 00:37:39.424 12:42:13 keyring_linux -- common/autotest_common.sh@969 -- # kill 1972129 00:37:39.424 12:42:13 keyring_linux -- common/autotest_common.sh@974 -- # wait 1972129 00:37:39.424 00:37:39.424 real 0m5.138s 00:37:39.424 user 0m9.460s 00:37:39.424 sys 0m1.395s 00:37:39.424 12:42:13 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:39.424 12:42:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:39.424 ************************************ 00:37:39.424 END TEST keyring_linux 00:37:39.424 ************************************ 00:37:39.684 12:42:13 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:37:39.684 12:42:13 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:39.684 12:42:13 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:39.684 12:42:13 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:37:39.684 12:42:13 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:37:39.684 12:42:13 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:37:39.684 12:42:13 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:39.684 12:42:13 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:37:39.684 12:42:13 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:37:39.684 12:42:13 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:37:39.684 12:42:13 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:39.684 12:42:13 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:37:39.684 12:42:13 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:39.684 12:42:13 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:39.684 12:42:13 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:37:39.684 12:42:13 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:37:39.684 12:42:13 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:37:39.684 12:42:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:39.684 12:42:13 -- common/autotest_common.sh@10 -- # set +x 00:37:39.684 12:42:14 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:37:39.684 12:42:14 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:39.684 12:42:14 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:39.684 12:42:14 -- common/autotest_common.sh@10 -- # set +x 00:37:47.828 INFO: APP EXITING 
00:37:47.828 INFO: killing all VMs 00:37:47.828 INFO: killing vhost app 00:37:47.828 INFO: EXIT DONE 00:37:50.373 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:37:50.373 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:37:50.373 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:37:50.373 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:37:50.373 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:37:50.373 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:37:50.373 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:37:50.373 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:37:50.373 0000:65:00.0 (144d a80a): Already using the nvme driver 00:37:50.373 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:37:50.373 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:37:50.373 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:37:50.373 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:37:50.373 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:37:50.634 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:37:50.634 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:37:50.634 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:37:53.934 Cleaning 00:37:53.934 Removing: /var/run/dpdk/spdk0/config 00:37:53.934 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:53.934 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:53.934 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:53.934 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:53.934 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:53.934 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:53.934 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:53.934 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:53.934 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:53.934 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:53.934 Removing: /var/run/dpdk/spdk1/config 00:37:53.934 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:53.934 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:53.934 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:53.934 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:53.934 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:53.934 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:53.934 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:53.934 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:53.934 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:53.934 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:53.934 Removing: /var/run/dpdk/spdk2/config 00:37:53.934 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:53.934 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:53.934 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:53.934 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:53.934 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:53.934 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:53.934 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:53.934 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:53.934 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:53.934 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:53.934 Removing: /var/run/dpdk/spdk3/config 00:37:53.934 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:53.934 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:53.934 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:53.934 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:53.934 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:53.934 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:53.934 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:53.934 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:53.934 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:53.934 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:53.934 Removing: /var/run/dpdk/spdk4/config 00:37:53.934 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:53.934 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:53.934 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:53.934 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:53.934 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:53.934 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:53.934 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:53.935 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:53.935 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:53.935 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:53.935 Removing: /dev/shm/bdev_svc_trace.1 00:37:53.935 Removing: /dev/shm/nvmf_trace.0 00:37:53.935 Removing: /dev/shm/spdk_tgt_trace.pid1403174 00:37:53.935 Removing: /var/run/dpdk/spdk0 00:37:53.935 Removing: /var/run/dpdk/spdk1 00:37:53.935 Removing: /var/run/dpdk/spdk2 00:37:53.935 Removing: /var/run/dpdk/spdk3 00:37:53.935 Removing: /var/run/dpdk/spdk4 00:37:53.935 Removing: /var/run/dpdk/spdk_pid1401566 00:37:53.935 Removing: /var/run/dpdk/spdk_pid1403174 00:37:53.935 Removing: /var/run/dpdk/spdk_pid1403838 00:37:53.935 Removing: /var/run/dpdk/spdk_pid1405062 00:37:53.935 Removing: /var/run/dpdk/spdk_pid1405231 00:37:53.935 Removing: /var/run/dpdk/spdk_pid1406476 00:37:53.935 Removing: /var/run/dpdk/spdk_pid1406485 00:37:53.935 Removing: /var/run/dpdk/spdk_pid1406923 00:37:53.935 Removing: /var/run/dpdk/spdk_pid1408010 00:37:53.935 Removing: /var/run/dpdk/spdk_pid1408656 00:37:53.935 Removing: /var/run/dpdk/spdk_pid1409048 00:37:53.935 Removing: /var/run/dpdk/spdk_pid1409444 00:37:53.935 Removing: /var/run/dpdk/spdk_pid1409860 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1410264 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1410855 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1411219 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1411514 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1412627 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1416140 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1416508 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1416886 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1416945 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1417534 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1417611 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1418075 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1418321 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1418680 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1418685 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1419010 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1419065 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1419642 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1419861 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1420268 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1424952 00:37:54.248 Removing: 
/var/run/dpdk/spdk_pid1430167 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1442254 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1442954 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1448330 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1448680 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1453764 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1460951 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1464670 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1477031 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1488077 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1490108 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1491352 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1512117 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1517098 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1574136 00:37:54.248 Removing: /var/run/dpdk/spdk_pid1580619 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1587633 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1594912 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1595001 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1596012 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1597017 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1598029 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1598702 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1598705 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1599041 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1599051 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1599065 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1600144 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1601177 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1602259 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1602900 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1603031 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1603268 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1604500 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1605902 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1615900 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1652023 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1657447 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1659528 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1662152 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1662343 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1662367 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1662691 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1663168 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1665427 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1666209 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1666874 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1669264 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1669966 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1670850 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1675737 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1682441 00:37:54.249 Removing: /var/run/dpdk/spdk_pid1682442 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1682443 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1687132 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1697089 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1701934 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1709485 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1711384 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1713086 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1714862 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1720627 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1725571 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1734490 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1734585 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1739689 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1739830 00:37:54.510 Removing: 
/var/run/dpdk/spdk_pid1740153 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1740678 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1740793 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1746194 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1746913 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1752199 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1755219 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1761603 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1768544 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1778628 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1786963 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1786965 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1810141 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1810833 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1811689 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1812198 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1813243 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1813838 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1814607 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1815403 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1820908 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1821246 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1828436 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1828661 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1835154 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1840282 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1851822 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1852496 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1857709 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1858065 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1862933 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1869929 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1873384 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1885479 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1896134 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1898132 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1899145 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1918522 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1923236 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1926965 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1934312 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1934338 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1940183 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1942469 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1944890 00:37:54.510 Removing: /var/run/dpdk/spdk_pid1946078 00:37:54.772 Removing: /var/run/dpdk/spdk_pid1948601 00:37:54.772 Removing: /var/run/dpdk/spdk_pid1949912 00:37:54.772 Removing: /var/run/dpdk/spdk_pid1959733 00:37:54.772 Removing: /var/run/dpdk/spdk_pid1960395 00:37:54.772 Removing: /var/run/dpdk/spdk_pid1961025 00:37:54.772 Removing: /var/run/dpdk/spdk_pid1963750 00:37:54.772 Removing: /var/run/dpdk/spdk_pid1964369 00:37:54.772 Removing: /var/run/dpdk/spdk_pid1965042 00:37:54.772 Removing: /var/run/dpdk/spdk_pid1969515 00:37:54.772 Removing: /var/run/dpdk/spdk_pid1969606 00:37:54.772 Removing: /var/run/dpdk/spdk_pid1971520 00:37:54.772 Removing: /var/run/dpdk/spdk_pid1972129 00:37:54.772 Removing: /var/run/dpdk/spdk_pid1972594 00:37:54.772 Clean 00:37:54.772 12:42:29 -- common/autotest_common.sh@1451 -- # return 0 00:37:54.772 12:42:29 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:37:54.772 12:42:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:54.772 12:42:29 -- common/autotest_common.sh@10 -- # set +x 00:37:54.772 12:42:29 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:37:54.772 12:42:29 -- common/autotest_common.sh@730 -- # 
xtrace_disable 00:37:54.772 12:42:29 -- common/autotest_common.sh@10 -- # set +x 00:37:54.772 12:42:29 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:54.772 12:42:29 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:54.772 12:42:29 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:54.772 12:42:29 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:37:54.772 12:42:29 -- spdk/autotest.sh@394 -- # hostname 00:37:54.772 12:42:29 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:55.034 geninfo: WARNING: invalid characters removed from testname! 00:38:21.697 12:42:54 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:24.285 12:42:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:25.670 12:42:59 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:27.056 12:43:01 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:29.602 12:43:03 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:30.986 12:43:05 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:32.900 12:43:07 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:32.900 12:43:07 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:38:32.900 12:43:07 -- common/autotest_common.sh@1691 -- $ lcov --version 00:38:32.900 12:43:07 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:38:32.900 12:43:07 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:38:32.900 12:43:07 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:38:32.900 12:43:07 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:38:32.900 12:43:07 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:38:32.900 12:43:07 -- scripts/common.sh@336 -- $ IFS=.-: 00:38:32.900 12:43:07 -- scripts/common.sh@336 -- $ read -ra ver1 00:38:32.900 12:43:07 -- scripts/common.sh@337 -- $ IFS=.-: 00:38:32.900 12:43:07 -- scripts/common.sh@337 -- $ read -ra ver2 00:38:32.900 12:43:07 -- scripts/common.sh@338 -- $ local 'op=<' 00:38:32.900 12:43:07 -- scripts/common.sh@340 -- $ ver1_l=2 00:38:32.900 12:43:07 -- scripts/common.sh@341 -- $ ver2_l=1 00:38:32.900 12:43:07 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:38:32.900 12:43:07 -- scripts/common.sh@344 -- $ case "$op" in 00:38:32.900 12:43:07 -- scripts/common.sh@345 -- $ : 1 00:38:32.900 12:43:07 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:38:32.900 12:43:07 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:32.900 12:43:07 -- scripts/common.sh@365 -- $ decimal 1 00:38:32.900 12:43:07 -- scripts/common.sh@353 -- $ local d=1 00:38:32.900 12:43:07 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:38:32.900 12:43:07 -- scripts/common.sh@355 -- $ echo 1 00:38:32.900 12:43:07 -- scripts/common.sh@365 -- $ ver1[v]=1 00:38:32.900 12:43:07 -- scripts/common.sh@366 -- $ decimal 2 00:38:32.900 12:43:07 -- scripts/common.sh@353 -- $ local d=2 00:38:32.900 12:43:07 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:38:32.900 12:43:07 -- scripts/common.sh@355 -- $ echo 2 00:38:32.900 12:43:07 -- scripts/common.sh@366 -- $ ver2[v]=2 00:38:32.900 12:43:07 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:38:32.900 12:43:07 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:38:32.900 12:43:07 -- scripts/common.sh@368 -- $ return 0 00:38:32.900 12:43:07 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:32.900 12:43:07 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:38:32.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.900 --rc genhtml_branch_coverage=1 00:38:32.900 --rc genhtml_function_coverage=1 00:38:32.900 --rc genhtml_legend=1 00:38:32.900 --rc geninfo_all_blocks=1 00:38:32.900 --rc geninfo_unexecuted_blocks=1 00:38:32.900 00:38:32.900 ' 00:38:32.900 12:43:07 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:38:32.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.900 --rc genhtml_branch_coverage=1 00:38:32.900 --rc genhtml_function_coverage=1 00:38:32.900 --rc genhtml_legend=1 00:38:32.900 --rc geninfo_all_blocks=1 00:38:32.900 --rc geninfo_unexecuted_blocks=1 00:38:32.900 00:38:32.900 ' 00:38:32.900 12:43:07 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:38:32.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:38:32.900 --rc genhtml_branch_coverage=1 00:38:32.900 --rc genhtml_function_coverage=1 00:38:32.900 --rc genhtml_legend=1 00:38:32.900 --rc geninfo_all_blocks=1 00:38:32.900 --rc geninfo_unexecuted_blocks=1 00:38:32.900 00:38:32.900 ' 00:38:32.900 12:43:07 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:38:32.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:32.900 --rc genhtml_branch_coverage=1 00:38:32.900 --rc genhtml_function_coverage=1 00:38:32.900 --rc genhtml_legend=1 00:38:32.900 --rc geninfo_all_blocks=1 00:38:32.900 --rc geninfo_unexecuted_blocks=1 00:38:32.900 00:38:32.900 ' 00:38:32.900 12:43:07 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:32.900 12:43:07 -- scripts/common.sh@15 -- $ shopt -s extglob 00:38:32.900 12:43:07 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:32.900 12:43:07 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:32.900 12:43:07 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:32.900 12:43:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.900 12:43:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.900 12:43:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.900 12:43:07 -- paths/export.sh@5 -- $ export PATH 00:38:32.900 12:43:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.900 12:43:07 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:32.900 12:43:07 -- common/autobuild_common.sh@486 -- $ date +%s 00:38:32.900 12:43:07 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730720587.XXXXXX 00:38:32.900 12:43:07 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730720587.TA0ELF 00:38:32.900 12:43:07 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:38:32.900 12:43:07 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:38:32.900 12:43:07 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:38:32.900 12:43:07 
-- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:32.900 12:43:07 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:32.900 12:43:07 -- common/autobuild_common.sh@502 -- $ get_config_params 00:38:32.900 12:43:07 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:38:32.900 12:43:07 -- common/autotest_common.sh@10 -- $ set +x 00:38:32.900 12:43:07 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:38:32.900 12:43:07 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:38:32.900 12:43:07 -- pm/common@17 -- $ local monitor 00:38:32.900 12:43:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:32.900 12:43:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:32.900 12:43:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:32.900 12:43:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:32.900 12:43:07 -- pm/common@21 -- $ date +%s 00:38:32.900 12:43:07 -- pm/common@21 -- $ date +%s 00:38:32.900 12:43:07 -- pm/common@25 -- $ sleep 1 00:38:32.900 12:43:07 -- pm/common@21 -- $ date +%s 00:38:32.900 12:43:07 -- pm/common@21 -- $ date +%s 00:38:32.900 12:43:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730720587 00:38:32.900 12:43:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730720587 00:38:32.900 12:43:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730720587 00:38:32.900 12:43:07 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730720587 00:38:32.900 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730720587_collect-cpu-load.pm.log 00:38:32.900 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730720587_collect-vmstat.pm.log 00:38:32.900 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730720587_collect-cpu-temp.pm.log 00:38:32.900 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730720587_collect-bmc-pm.bmc.pm.log 00:38:33.844 12:43:08 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:38:33.844 12:43:08 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:38:33.844 12:43:08 -- spdk/autopackage.sh@14 -- $ timing_finish 00:38:33.844 12:43:08 -- common/autotest_common.sh@736 -- $ 
flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:33.844 12:43:08 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:38:33.844 12:43:08 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:33.844 12:43:08 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:33.844 12:43:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:33.844 12:43:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:33.844 12:43:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:33.844 12:43:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:33.844 12:43:08 -- pm/common@44 -- $ pid=1985347 00:38:33.844 12:43:08 -- pm/common@50 -- $ kill -TERM 1985347 00:38:33.844 12:43:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:33.844 12:43:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:33.844 12:43:08 -- pm/common@44 -- $ pid=1985348 00:38:33.844 12:43:08 -- pm/common@50 -- $ kill -TERM 1985348 00:38:33.844 12:43:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:33.844 12:43:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:33.844 12:43:08 -- pm/common@44 -- $ pid=1985350 00:38:33.844 12:43:08 -- pm/common@50 -- $ kill -TERM 1985350 00:38:33.844 12:43:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:33.844 12:43:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:33.844 12:43:08 -- pm/common@44 -- $ pid=1985373 00:38:33.844 12:43:08 -- pm/common@50 -- $ sudo -E kill -TERM 1985373 00:38:33.844 + [[ -n 1316787 ]] 00:38:33.844 + sudo kill 1316787 00:38:33.856 [Pipeline] } 00:38:33.872 [Pipeline] // stage 00:38:33.878 [Pipeline] } 00:38:33.892 [Pipeline] // timeout 00:38:33.897 [Pipeline] } 00:38:33.913 [Pipeline] // catchError 00:38:33.918 [Pipeline] } 00:38:33.933 [Pipeline] // wrap 00:38:33.939 [Pipeline] } 00:38:33.956 [Pipeline] // catchError 00:38:33.966 [Pipeline] stage 00:38:33.968 [Pipeline] { (Epilogue) 00:38:33.982 [Pipeline] catchError 00:38:33.983 [Pipeline] { 00:38:33.997 [Pipeline] echo 00:38:33.999 Cleanup processes 00:38:34.006 [Pipeline] sh 00:38:34.298 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:34.298 1985489 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:34.298 1986045 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:34.313 [Pipeline] sh 00:38:34.604 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:34.604 ++ grep -v 'sudo pgrep' 00:38:34.604 ++ awk '{print $1}' 00:38:34.604 + sudo kill -9 1985489 00:38:34.616 [Pipeline] sh 00:38:34.905 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:47.152 [Pipeline] sh 00:38:47.442 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:47.443 Artifacts sizes are good 00:38:47.458 [Pipeline] archiveArtifacts 00:38:47.466 Archiving artifacts 00:38:47.605 [Pipeline] sh 00:38:47.894 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:47.908 [Pipeline] cleanWs 00:38:47.918 [WS-CLEANUP] Deleting 
project workspace... 00:38:47.918 [WS-CLEANUP] Deferred wipeout is used... 00:38:47.927 [WS-CLEANUP] done 00:38:47.929 [Pipeline] } 00:38:47.943 [Pipeline] // catchError 00:38:47.954 [Pipeline] sh 00:38:48.268 + logger -p user.info -t JENKINS-CI 00:38:48.277 [Pipeline] } 00:38:48.291 [Pipeline] // stage 00:38:48.295 [Pipeline] } 00:38:48.308 [Pipeline] // node 00:38:48.313 [Pipeline] End of Pipeline 00:38:48.344 Finished: SUCCESS
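For reference, the coverage post-processing logged between 12:42:54 and 12:43:07 reduces to one merge followed by a series of filter passes. A condensed sketch using the paths and patterns from the log, with the long --rc lcov_branch_coverage/--rc genhtml_* flag block elided for brevity:

    # Sketch of the lcov sequence above: merge the pre-test baseline with the
    # post-test capture, then strip trees that should not count as SPDK coverage.
    OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        # the log adds --ignore-errors unused,unused only on the '/usr/*' pass
        lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done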